Two things reminded me about computer responsiveness recently, one from a long time ago and the other just today. Over 30 years ago I used to program real-time flight simulators, where I was responsible for the fuel, electrical and hydraulic systems. There was no operating system to speak of, just an ‘executive’ program that controlled the time-slice, in milliseconds, within which each program had to compute its results. If you didn’t finish within your allocated time-slice, too bad; you were just cut off. So if your fuel calculation wasn’t completed and the fuel gauges weren’t updated, the engines never ran out of fuel and the aircraft weight remained constant. Ultimate responsiveness: you moved the flight controls and the aircraft responded precisely as it would in the real world, without any hesitation. No mice to click, no cursors to watch, it just happened!
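The scheme described above can be sketched in a few lines. This is only a toy illustration, not the simulator's actual executive: real systems of that era preempted tasks in hardware, whereas this Python sketch simply measures each task after the fact and discards any result that overran its slice, so stale values (frozen fuel gauges) carry over to the next frame. The class and task names are mine, invented for the example.

```python
import time


class Executive:
    """Toy frame-based executive. Each task gets a fixed time-slice per
    frame; a task that overruns its slice has its results thrown away,
    so the state it would have updated keeps the previous frame's values
    (the 'fuel gauges never update' effect from the post).

    Note: this discards results after the fact rather than truly cutting
    the task off mid-computation, which plain Python cannot easily do."""

    def __init__(self, slice_ms):
        self.slice = slice_ms / 1000.0   # per-task budget in seconds
        self.tasks = []                  # list of (name, fn)
        self.state = {}                  # last committed result per task

    def add_task(self, name, fn):
        self.tasks.append((name, fn))

    def run_frame(self):
        for name, fn in self.tasks:
            start = time.perf_counter()
            result = fn()                             # candidate result
            elapsed = time.perf_counter() - start
            if elapsed <= self.slice:
                self.state[name] = result             # finished in time: commit
            # else: cut off -- previous frame's value stands unchanged
```

A task that fits its slice commits its result each frame; one that overruns leaves its state frozen at whatever the last successful frame produced, exactly the behaviour that kept the simulated engines running forever.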
I had to edit a home video today. I chose my powerful gaming computer, but quickly realised that for all its powerful processor, generous memory capacity and cavernous hard disk, the editing session was stuttering and painful. Something was going wrong, but I couldn’t be bothered to find out what. I suspected the video card drivers, but they worked well with the 3D games I played, so I was reluctant to change them. I moved the video clips to the small HP server (see previous blogs) and sure enough everything was quick and responsive, even though its specification was below the gaming system’s in every respect apart from the striped RAID hard disks.
In both these examples it was the environment and the task that determined the responsiveness of the system, not its raw performance and capabilities. So why are some systems so much more responsive if raw performance and resources have nothing to do with it? And how can you measure this?
Saturday, July 12, 2008