[koko] MISP 2022

“Things Have Changed” – Bob Dylan (1999)

“Everything, still remains the same”
“(Sittin’ On) the Dock of the Bay” – Otis Redding (1967)

tl;dr embarking on quantifying meaningful indicators of small computer system performance over the last three decades

In graduate school and in subsequent professional work, analyzing the performance of computer systems was often my primary effort, including much of the software I wrote and my first three books. Years ago I pontificated about Meaningful Indicators of System Performance, surveying the (mostly) synthetic benchmarks in vogue in 1990. I was also active in the formation of the Business Applications Performance Corporation (BAPCo).

Those interests have mostly taken a back seat in the last three decades. Recently, I was remarking to a friend, also an expert in systems performance, that my new Raspberry Pi Zero 2 W seemed so much faster than the original Raspberry Pi Zero W I had been using that browsing with Chrom(ium) is viable on the Zero 2 in a way it is not on the original. He asked me to quantify the difference, but I couldn’t at the time. That has spurred me to embark on trying to meaningfully characterize the performance of the small computers I’ve used since 1991.

[photo: three decades of computers]

          Some of my three decades of computers,
          iMac in the upper left, three Raspberry Pi Zeroes circled in red

Rigorous success seems impractical, if not impossible:

  • The range of performance from 1991 486 machines to the best of current personal machines is large enough to keep older methods from being meaningful on modern machines and to keep modern methods from running at all on older machines.
  • As has long been the case, methods are controversial, with advocates disparaging competing alternatives.
  • Focus on specific system aspects, e.g., processors, memory, and storage systems, doesn’t necessarily correspond to overall user experience.
  • I plan to minimize expenditures (many benchmarks, e.g., SPEC and BAPCo, are products for sale).

Much of what I have done so far is inventorying what I have in hardware and software, surveying what seems to be modern practice, and seeing what published results are available.

I’m pursuing two tracks:

  1. processor-oriented benchmarks, primarily based on SPEC89, since I still have a copy I purchased in 199x and it may still be meaningful on modern machines, and
  2. browser-based benchmarks, since so much of modern computing is browser-based, and even the 1991 486 machines can run browsers.

I hope to run at least SPECint89 on all the target machines in my collection, from 486 forward. SPECfp89 requires Fortran — I’m pessimistic that I can get useful Fortran compilers for many of the older machines.
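Since the oldest targets predate C99, whatever harness I build has to stick to ANSI C89. Here is a minimal sketch of the kind of timing wrapper I have in mind; the workload is just a placeholder, not a SPEC kernel, and clock() measures CPU time rather than wall-clock time:

/* Minimal ANSI C89 timing sketch; the workload is a placeholder,
   not a SPEC program. */
#include <stdio.h>
#include <time.h>

static long workload(long n)
{
    long i, sum = 0;
    for (i = 0; i < n; i++)
        sum += i % 7;          /* stand-in for a benchmark kernel */
    return sum;
}

int main(void)
{
    clock_t start, stop;
    long check;

    start = clock();
    check = workload(50000000L);
    stop = clock();

    printf("checksum %ld, CPU time %.2f s\n", check,
           (double)(stop - start) / CLOCKS_PER_SEC);
    return 0;
}

clock() and CLOCKS_PER_SEC are in the original ANSI standard, so this should compile with the compilers that shipped with the 486-era machines as well as with current GCC and LLVM.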

The more modern the browser benchmark, the less likely it is to run on older machines. I have cursory results from the “obsolete” SunSpider for Chrom(ium) on a range of machines:

M1 MacBook Pro:         189.9ms
i5 MacBook Pro:         253.3ms
i5 Windows10-32:        307.6ms
i7 Windows10-64:        308.4ms
Raspberry Pi 4:         885.3ms
*Pentium 4 3GHz XP:    1105.6ms
Raspberry Pi Zero 2 W: 2540.5ms
Raspberry Pi Zero W:  20501.9ms

*Chrome 49 and IE 8 on XP failed with SunSpider. Results are from Firefox 52. Older browsers on other machines seem to fail for various reasons, so I’m hoping to find an older browser benchmark that might still be useful. [update 10/31/23: The above numbers are from the original site, which is no longer available. I have created a nominal clone at https://technologists.com/SunSpider/SunSpider1.0.2.html, but it appears that the numbers from the clone might be significantly lower than those from the original site. It could be that the lower numbers are because of changes in Chrom(ium). Also, see SunSpider 1.0 JavaScript Benchmark for JS, which seems to show even lower numbers than the nominal 1.0.2 clone.]

So far, the most striking result, though not a surprise given prior experience, is the order-of-magnitude gap between the Pi Zero and the Pi Zero 2. The Windows i5 is four years older than the MacBook Pro i5, so the MacBook Pro being faster is not a surprise. The Windows i7 is also from 2011 and doesn’t feel any faster than the Windows i5, so their numbers being essentially the same makes sense. I thought the M1 MacBook Pro might have done even better relative to the others. My friend pointed out that Safari would likely be faster than Chrome on macOS, and it is: i5 Safari 151.4ms, M1 Safari 115.6ms. M1 Firefox is also faster, at 159.0ms. My friend has reported even better numbers from Safari on a well-configured i7 Mac Mini and from Safari on an M1X.
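To make the comparisons concrete, here is a throwaway sketch, with the SunSpider times hard-coded from the table above, that normalizes each machine to the fastest one (lower time is better, so the ratio is each time divided by the best time):

/* Throwaway sketch: SunSpider times from the table above,
   hard-coded, normalized to the fastest machine. */
#include <stdio.h>

int main(void)
{
    static const char *name[] = {
        "M1 MacBook Pro", "i5 MacBook Pro", "i5 Windows10-32",
        "i7 Windows10-64", "Raspberry Pi 4", "Pentium 4 3GHz XP",
        "Raspberry Pi Zero 2 W", "Raspberry Pi Zero W"
    };
    static const double ms[] = {
        189.9, 253.3, 307.6, 308.4, 885.3, 1105.6, 2540.5, 20501.9
    };
    int i, n = (int)(sizeof(ms) / sizeof(ms[0]));

    for (i = 0; i < n; i++)
        printf("%-22s %9.1f ms  %6.1fx\n", name[i], ms[i], ms[i] / ms[0]);
    return 0;
}

By this measure the Zero W comes out roughly 8x slower than the Zero 2 W (20501.9/2540.5), which matches the subjective difference that started this whole exercise.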

The next step seems to be trying SPEC on the range of machines. I’m preparing a testing framework, beginning with the GNU and LLVM C compilers on current Fedora, since those supposedly can coexist, and I’m hoping that coexistence will also extend to the GNU and LLVM Fortran compilers.
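One sanity check I plan to build in is having each benchmark binary report which compiler produced it. Here is a minimal sketch using only the standard predefined macros; note that clang also defines __GNUC__ for compatibility, so the __clang__ test has to come first:

/* Report which compiler built this binary; clang defines __GNUC__
   too, so test __clang__ first. */
#include <stdio.h>

int main(void)
{
#if defined(__clang__)
    printf("built with clang/LLVM: %s\n", __VERSION__);
#elif defined(__GNUC__)
    printf("built with GCC: %s\n", __VERSION__);
#else
    printf("built with an unidentified compiler\n");
#endif
    return 0;
}

Building the same file with both toolchains, say gcc whichcc.c -o whichcc-gnu and clang whichcc.c -o whichcc-llvm (the file name is just illustrative), is a quick way to confirm the two really are coexisting.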
