Wednesday, March 14, 2018

Frames of Reference

Lightning strikes the houses at either end of your street. As your neighbor at one end relates it, lightning struck his end of the street first, then the other end. Your other neighbor who lives at the other end says just the opposite: lightning struck his end first, then the other end. You live in the middle, and you saw it strike both ends simultaneously. Who's right?

You all are. That's what Einstein was getting at in the Special Theory of Relativity: as long as every observer is moving at a constant velocity - occupying what physicists call an inertial frame - there is no preferred frame of reference. None of you is more correct than the others. Under no circumstances can information travel faster than the speed of light, so our perception of the order of events that occur in the world around us depends upon our position within the world and the speed at which we are traveling. There is no right answer. Or at least, none that is any more right than another.

As anthro-astronomer Anthony Aveni points out in his book Empires of Time [University Press of Colorado, 2002], a study of timekeeping across cultures and throughout history, even if you were unfortunate enough to have your eyeball pressed right up against the place on the roof where the bolt of lightning struck, there is still a non-zero latency for the light to reach your eyeball, for it to be converted into nerve impulses, for those impulses to travel to your brain, for them to be interpreted by your conscious mind, and for your mind to order your mouth to say "HOLY CRAP!" Any sense of simultaneity is completely subjective.

And so it is with the real-time systems we build. I've worked on all sorts of stuff over my long career: ATM-based optical networking for internationally distributed telecommunication systems; GSM and CDMA cellular base stations; Iridium satellite communication systems for business aircraft; Zigbee wireless sensor networks; RS-485 connected industrial lighting control systems, to name just a few. Plus a lot of the more typical Ethernet, WiFi, and LTE communications systems designed to tie together a variety of computers. These systems all suffer from the same lack of a preferred frame of reference that we do.

Each of these systems can be thought of as a state machine that receives stimuli from many different sources and through many different channels. A vastly complex state machine, for sure. But ultimately we can think of its entire memory as one gigantic number - a Gödel number - perhaps billions of bits in length, each unique value encoding a specific state. Each real-time stimulus causes this enormous number to change, transitioning the state machine to a new state. Because there is a finite amount of memory in the system, there is a finite (albeit really, really big) number of unique states that the system can assume.
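As a minimal sketch of that idea - entirely hypothetical, and not drawn from any of the systems above - the whole state can be modeled as a single (here, absurdly small) number, and each stimulus as a deterministic function that maps the current state to the next one:

    #include <stdint.h>

    /* The entire memory of the machine collapsed into one number. A real
     * system would need billions of bits; a 64-bit integer stands in for
     * the idea. */
    typedef uint64_t state_t;

    /* The kinds of stimuli the machine might receive. */
    typedef enum { STIMULUS_MESSAGE, STIMULUS_SENSOR, STIMULUS_TIMER } stimulus_t;

    /* Each stimulus deterministically maps the current state to a new one.
     * The arithmetic is arbitrary; it only illustrates that the transition
     * is a pure function of (state, stimulus). */
    static state_t transition(state_t state, stimulus_t stimulus)
    {
        switch (stimulus) {
        case STIMULUS_MESSAGE: return state ^ 0x5a5a5a5a5a5a5a5aULL;
        case STIMULUS_SENSOR:  return (state * 31U) + 1U;
        case STIMULUS_TIMER:   return state + 17U;
        default:               return state;
        }
    }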

The stimuli to this state machine may arrive via a variety of mechanisms. Network messages from other nodes might arrive via Ethernet, WiFi, and cellular networks. Sensor values might be read via various hardware schemes like SPI, I2C, GPIO, ADC, and UART, each of which has its own bandwidth and, more importantly, its own latency. The hardware latency of these technologies varies widely; some, like the analog-to-digital converter, are quite slow relative to the others. And on top of the hardware latency sits the software processing latency of whatever protocol stack is used.

In addition to all of these stimuli that are the results of events occurring in the outside world, internally generated stimuli are arriving all the time as well. Peruse any network standard and you'll find a slew of timers defined: timers that are set when certain conditions are met, cancelled when others are met, and that inject their own stimuli into the state machine when they fire.

Once received, each stimulus has to be serialized - placed in some discrete order of events relative to all of the other stimuli - to be injected into the state machine so that the system acts deterministically.
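Here is a minimal sketch of that serialization, assuming POSIX threads, with names I invented for illustration rather than anything taken from a real product. Every producer - a network receive path, a sensor driver, an expiring timer - pushes an event onto a single queue, and a single consumer thread feeds the events to the state machine in the order they were enqueued:

    #include <pthread.h>
    #include <stdint.h>

    typedef struct {
        int      source;   /* which channel it arrived on, e.g. Ethernet, SPI, ADC, timer */
        uint64_t payload;  /* whatever the stimulus carried */
    } event_t;

    #define QUEUE_DEPTH 64

    static event_t         queue[QUEUE_DEPTH];
    static unsigned        head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

    /* Called from any producer: a network receive path, a sensor driver,
     * or an expiring timer. The order in which producers acquire the
     * mutex is the serialization. */
    void event_put(event_t event)
    {
        pthread_mutex_lock(&lock);
        while (count == QUEUE_DEPTH)
            pthread_cond_wait(&nonfull, &lock);
        queue[tail] = event;
        tail = (tail + 1U) % QUEUE_DEPTH;
        count += 1U;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* Called only by the single thread that runs the state machine, so
     * every stimulus is consumed in exactly one total order. */
    event_t event_get(void)
    {
        event_t event;
        pthread_mutex_lock(&lock);
        while (count == 0U)
            pthread_cond_wait(&nonempty, &lock);
        event = queue[head];
        head = (head + 1U) % QUEUE_DEPTH;
        count -= 1U;
        pthread_cond_signal(&nonfull);
        pthread_mutex_unlock(&lock);
        return event;
    }

The order in which the producers happen to win the lock becomes the machine's one and only perception of the order of events in the outside world.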

If two companies develop competing products to solve the same problem, they will likely make different hardware design choices and write different software. Their products will implement different hardware and software paths, even if they use the same networks and sensors. The real-time stimuli that get injected into their state machines will, at least occasionally, be serialized in different orders. Their systems will transition to different states. Because of that, they may make different decisions, based on their own unique - and different - perceptions of reality. Which system is right?
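As a toy illustration - invented here, not taken from any real product - two machines that serialize the same two stimuli in opposite orders can land in genuinely different states whenever the transitions don't commute:

    #include <stdio.h>
    #include <stdint.h>

    /* Two hypothetical, non-commuting transitions: a network message
     * doubles the value, a sensor reading adds one. */
    static uint64_t apply_message(uint64_t state) { return state * 2U; }
    static uint64_t apply_sensor(uint64_t state)  { return state + 1U; }

    int main(void)
    {
        uint64_t first  = apply_sensor(apply_message(1U)); /* message, then sensor: (1*2)+1 = 3 */
        uint64_t second = apply_message(apply_sensor(1U)); /* sensor, then message: (1+1)*2 = 4 */
        printf("message-then-sensor: %llu, sensor-then-message: %llu\n",
               (unsigned long long)first, (unsigned long long)second);
        return 0;
    }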

Maybe they are all right. Even when the system responds in a way we think is incorrect, we may still be basing that assessment on our own perception of what happened in the real world, which may or may not be any more or less correct than what the hardware and software system perceived. We may believe that our own reference frame is the preferred one, but both we and our cyber counterparts are subject to the same laws of Special Relativity as applied to biological and silicon systems.

I think about this a lot when I read articles on driverless vehicles. Vehicle automation doesn't have to be perfect. It just has to be better than the human behind the wheel. Or the joystick. That may be a low bar to hurdle. There are still likely to be cases where the vehicle automation system makes a decision different from the one we would have made ourselves. That doesn't make it wrong. But it might be hard to understand why it did what it did without a lot of forensic log analysis.

My own Subaru WRX has an EyeSight automation system that includes features like collision avoidance, lane assist, and adaptive cruise control. EyeSight gets concerned when I drive through the car wash. It sometimes loses track of the road completely in heavy rain or snow. But for the most part, it works remarkably well. I don't depend on it.

Just this past weekend, Mrs. Overclock and I celebrated our thirty-fourth wedding anniversary in Las Vegas, a short plane ride from our home near Denver. We witnessed the eight-passenger driverless shuttle, part of a pilot program, cruising around the downtown area near Fremont Street. We saw it hesitate while pulling away from a stop because of oncoming traffic. Its full load of passengers weren't screaming in terror.

We also saw an empty and unmanned Las Vegas monorail train blow through the station where we were waiting, with a very loud bang and a big shower of sparks. This resulted in someone with a walkie-talkie walking the track, and eventually an announcement of a service disruption with no estimated time of repair. On the cab ride - plan B - we noticed another train stuck at an earlier station waiting for service to resume.

The Denver-area light/commuter rail was due to be extended to our neighborhood nearly a year and a half ago. The rail line is complete, and all the stations are ready, with their brightly lit platforms and scrolling electronic signs. But the developers can't get the positive train control to work as required by federal regulations; the automated crossing gates apparently stay down about twenty seconds too long. Sounds minor, but this defect - which has to be a lot more complex than it sounds, or it would have been solved long ago - has cost millions of dollars. It has also kept Mrs. Overclock and me from taking the train from the station at the end of this line, which is within walking distance of our suburban home, to downtown Denver for the theatre and concerts we regularly attend, or even to Denver International Airport by changing lines downtown.

Do any of these automated systems suffer from a disparity in reference frames with their users? Dunno. But in the future, when I work on product development projects with a lot of real-time components (which, for me, is pretty much all of them), I'm going to be pondering even more the implications of our hardware and software design decisions, how they impact the way the system responds to events in the real world, and how I'm going to troubleshoot the system when it inevitably makes a decision that its user finds inexplicable.

Wednesday, January 10, 2018

Windows Update on Windows 7 after December 3rd 2017

Tried to run Windows Update on a Windows 7 system after December 3rd and got an error message saying it can't check for updates because the service isn't running? Yep, me too. If you search for this, you'll find we're not alone; probably millions of folks got caught by a botched timestamp buried in a file in a compressed archive deep in the Windows file system.

What the web suggests and what worked for me: set your system clock back to before the expiration date (I used December 1st), run Windows Update (it downloads updates that include a fix for this), let it restart your system, then maybe try running it again (because more updates showed up for me and I had to restart again), and then you can change the date back to today. Windows Update works afterwards.

Why Windows 7? Some proprietary embedded tools I must use from time to time - mostly for talking to hardware devices - haven't made the jump yet to Windows 10, or even Windows 8. Also, it doesn't suck.

Wednesday, November 08, 2017

The Kernel

(photo credit: Dale Courte)

The Kernel - which is the only name it ever had, long predating the use of the term in the context of Linux - was a tiny little real-time operating system kernel written in PDP-11 assembler language circa 1980 by Dayton Clark when he was a graduate student in computer science at Wright State University. The Kernel was used in the infamous CS431 Advanced Programming course, which later became CEG431 Real-Time Software Design when the degree program split into separate Computer Science and Computer Engineering majors. CS431 was developed by Wright State professors Bob Dixon and Joe Kohler. Over the span of the ten-week quarter, students had to complete a complex multi-threaded application that used the Kernel to synchronize and communicate among several threads, each of which controlled interrupt-driven I/O devices, some of which supported DMA. It was all written in PDP-11 assembler and ran in a total of eight kilobytes of core memory on a PDP-11/05. The course culminated in an oral exam featuring a dump analysis. You had to pass the oral exam to pass the course.

CS431 was required for undergraduate computer science/computer engineering majors to graduate, and for graduate students to take upper division graduate courses. Students either loved it or hated it. Area employers, who for the most part were heavily into defense-related embedded development in the neighborhood of the ginormous Wright-Patterson Air Force Base, loved it. And perhaps for that reason, some students came to love it in retrospect.

From around 1982 through 1985, the code was substantially cleaned up, refactored, and documented by John Sloan and David Hemmendinger, both of whom - along with Dale Courte and others, all graduate students - went on to teach the course. The kernel was split into two source files, the pure-code System part and the impure-data Control Block part, to expedite burning the pure portion into EEPROM. It supported process creation and destruction, counting semaphore wait and signal, and asynchronous message passing. It did so in only a handful of PDP-11 machine instructions.
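For readers who never saw it, here is a rough sketch in C of what that set of primitives amounts to. These declarations are my own invention for illustration - the actual Kernel was PDP-11 assembler, and these are not its real entry points or names:

    /* Hypothetical C rendering of the primitives described above. */

    typedef struct process   process_t;    /* a schedulable thread of control */
    typedef struct semaphore semaphore_t;  /* a counting semaphore */
    typedef struct message   message_t;    /* a unit of asynchronous message passing */

    /* Process creation and destruction. */
    extern process_t * process_create(void (*body)(void), void * stack, unsigned stacksize);
    extern void process_destroy(process_t * pp);

    /* Counting semaphore wait (P) and signal (V). */
    extern void semaphore_wait(semaphore_t * sp);
    extern void semaphore_signal(semaphore_t * sp);

    /* Asynchronous message passing: in this sketch, send does not block,
     * and receive blocks until a message arrives. */
    extern void message_send(process_t * to, message_t * mp);
    extern message_t * message_receive(void);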

Besides being used pedagogically, the Kernel made its way into a number of other research projects, including SLICK (which supported message passing among a network of LSI-11 microprocessors), TASK4TH (a FORTH-based LSI-11 workstation for real-time data acquisition and robotic control), and FPS (a functional programming system). The Kernel was ported to a number of other microprocessor targets, re-written in C and C++, and, rumor has it, used by former students in their own production systems in the commercial and defense sectors. My alter ego John Sloan wrote a technical report describing the Kernel in detail, as well as suggesting techniques for debugging embedded applications that use it, which was always a challenge.

It is tempting to say that the Kernel is of historical interest only. But tiny little microkernels taking only a few machine instructions have come into vogue as minuscule microcontrollers become embedded in everything. The Kernel was more than a tool used to teach a class. It has informed my career for the past thirty-five years.
(2017-11-08: To facilitate the efforts of far-future - and, it must be said, extremely hypothetical - cyberarcheologists, the source code for the Kernel, the technical report, and other related material have been stored in a repository on GitHub.)