Quotes of the week – hardware vs. software

… it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks!

Donald Knuth (Quoted here)

Computer science is no more about computers than astronomy is about telescopes.

Edsger Dijkstra (although disputed)

Real computer scientists despise the idea of actual hardware. Hardware has limitations, software doesn’t. It’s a real shame that Turing machines are so poor at I/O.


Much as I hate to disagree with Donald Knuth, I think he is being overly harsh to the hardware manufacturers (at least on this particular topic). Chips are now being produced with 14 nm features. Chip designers are faced with some fundamental limits of the universe: quantum effects and the speed of light matter at this scale. They are also having to deal with billions of transistors on a chip. I am always amazed that anything computer-based works at all.

I used Donald Knuth’s quote in a talk Chuck Rose and I gave at a conference on domain-specific languages a few years ago. The talk can be found here. It goes into detail about Pixel Bender, a high-performance language for image processing (i.e., very fast, very cool special effects on images) that can use as much parallelism as is available.
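The key idea behind that kind of language is the per-pixel data-parallel model: each output pixel depends only on its corresponding input, so every pixel can be computed independently. Here is a minimal sketch of that model in Python with NumPy (the `brighten` kernel and `gain` parameter are illustrative inventions, not Pixel Bender syntax):

```python
import numpy as np

def brighten(pixels, gain=1.5):
    # A pure per-pixel function: each output value depends only on the
    # corresponding input value. NumPy applies it across the whole array
    # at once, and a parallel runtime could split the work freely.
    return np.clip(pixels * gain, 0.0, 1.0)

# Tiny 4x4 RGB "image" with values in [0, 1].
image = np.random.default_rng(0).random((4, 4, 3))
result = brighten(image)

assert result.shape == image.shape
assert float(result.max()) <= 1.0
```

Because there are no dependencies between pixels, the same kernel scales from one core to many without any change to the kernel itself.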

As usual, Dijkstra makes the rest of us look like we are banging our heads against the keyboard and accepting whatever results as a valid program. The University of Texas maintains an archive of Dijkstra’s manuscripts. (See also a previous quote of the week).

And finally, while “real computer scientists” might “despise the idea of actual hardware”, some of us have to deal with actual hardware to make a living. I am immensely grateful for the fact that we have top notch thinkers like Knuth and Dijkstra, and I am also grateful that we have engineers at the chip companies doing the hard work necessary to give us processors that work.

3 thoughts on “Quotes of the week – hardware vs. software”

  1. I think what Knuth means here is that, for the hardware manufacturers, raw speed on a few benchmarks is the only thing that matters, while other ideas and methodologies (like parallelization) get much less attention. Sure, 14 nm is a big feat, but we should explore fields other than miniaturization!

  2. John Payson says:

    IMHO, what’s needed is for hardware vendors and compiler/framework developers to work together on ways to make hardware more aware of certain kinds of data structures. Although most C systems regard memory as a uniform linear blob, modern frameworks like Java and .NET regard it as mostly holding a bunch of discrete objects. Today’s hardware, however, is generally agnostic to how memory is used and makes no effort to exploit the discrete-object usage pattern.

    If array elements are always accessed as offsets from a base register that identifies the array as a whole, if methods typically load a base pointer once and then access many elements through it, and if methods running on different threads *mostly* access different arrays, then hardware shouldn’t have to test every array-element access for caching conflicts. If two cores have loaded different values into their address-base registers, then all legitimate accesses using those registers touch disjoint array elements, so testing individual accesses wouldn’t be necessary. Only when two cores have the same base address loaded would finer-grained arbitration be needed.

    Garbage-collector-aware hardware could also greatly help with concurrent garbage collection by automatically tagging objects that have been written since the last time the GC examined them. Using eager atomic read-modify-write operations to test and set the tag bits would be dreadful for performance, but specialized hardware could implement a deferred means of setting the bits without needing that sort of atomic access.

    In a memory model where read and write barriers are global, the relative cost of such barriers will likely only increase with time. Having hardware and software coordinate finer-grained control should reduce such costs.

  3. This totally reminds me of an article I read recently about highway congestion. The basic premise of the research was that, no matter how much or how little new road capacity you add to a city, the overall traffic congestion stays about the same. I think there’s a “law” of some kind that says something like “as efficiency increases, so too does utilization.” People are just greedy, I guess. I’m less gloomy. It’s taken both chip power and really smart software (not to mention the electronics of networking) to give us things like automated build and testing, distributed code repositories, distraction-free backup, user-proof tablet OSes, etc. Some of these things really are general improvements to computing, I’d say.
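The tagging scheme described in the second comment is essentially card marking, which concurrent collectors already implement in software. A minimal sketch of the bookkeeping (the `Heap` class and its methods are invented for illustration; real hardware support would set the marks as a side effect of the store, with no extra instructions):

```python
class Heap:
    """Toy heap with one dirty bit ("card") per object."""

    def __init__(self, num_objects):
        self.objects = [dict() for _ in range(num_objects)]
        self.dirty = [False] * num_objects

    def write(self, index, key, value):
        # The write barrier: every store also marks the object dirty,
        # so the collector knows it changed since the last scan.
        self.objects[index][key] = value
        self.dirty[index] = True

    def gc_scan(self):
        # A concurrent collector only revisits dirty objects, then
        # clears the marks for the next cycle.
        rescanned = [i for i, d in enumerate(self.dirty) if d]
        self.dirty = [False] * len(self.dirty)
        return rescanned

heap = Heap(4)
heap.write(1, "field", "x")
heap.write(3, "field", "y")
print(heap.gc_scan())  # → [1, 3]
print(heap.gc_scan())  # → []
```

The cost the comment worries about is exactly the `self.dirty[index] = True` line executed on every store; hardware that sets the bit for free would remove it from the software path entirely.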
