Bob-clock v2.0 circuit

This page describes the circuit used in the v2.0 hardware for the Bob-clock. The design was mostly SMD (except the LEDs, of course). When making the v2.1 hardware, besides changing all components to through-hole technology, I also made a lot of changes to the electronics interfacing. This page therefore describes the old hardware, which used a level-shifting technique to drive the LEDs at one voltage while the microcontroller was operating at another.

NOTE: This page is for informational purposes only. If you want information about how the new circuit works, you should check this page.

LED matrix and driver circuit

If you take a look at the schematics for the LEDs, you can see that they are logically arranged into an 8 by 8 matrix. Since each LED has one cathode and two anodes (one per color), this gives an 8 by 16 electrical matrix. The 8 cathode connections (rows) are connected to N-channel FET drivers (through current limiting resistors), which are controlled directly from PORT C of the microcontroller.

To drive the anodes (columns), there are 16 P-channel FET drivers. Looking at the schematics for the main controller you will notice that these FETs use the unregulated power supply (to avoid driving the LEDs off of a regulated supply). This also means that the gate voltage must be at this level (about 7V) for the FETs to be off. In order to achieve this, a 4515 is used as a multiplexer for selecting the appropriate column in the LED matrix. The 4515 has inverted outputs, which are ideal for driving the FET gates, and since the 4515 is also supplied from the unregulated power supply (most 4000-series CMOS logic works with supply voltages in the range 3V-15V), the levels are fine for this.

In order to interface to the microcontroller, some inverters are made with another group of N-channel FETs (and pull-ups). This provides the needed level shift. Notice how the numbers on the row drivers are reversed relative to the outputs of the 4515. With this configuration, the outputs from the microcontroller (PB0-PB3), although inverted by the level shifters, still select the correct output (as if they had not been inverted). When I made the v2.1 hardware I actually forgot that I had removed the inversion, so the LED groups had to be addressed in a different way. Luckily this was easily fixed in the firmware (it is now a compile-time option, for backwards compatibility).

The configuration means that the 4515 should first be used to select a group of 8 diodes and a color (e.g. one of the columns), by means of the 4 data-bits and the strobe signal. Then these 8 LEDs can be controlled by the 8 row-drivers on PORT C.

The firmware divides the time equally among the 16 columns, so when an LED is lit, it is still only on for 1/16th of the time. The LEDs used in the first v1.0 HW were not that bright, and the driver circuit had multiplexers on both rows and columns, so only one LED could be turned on at a time (compared to 8 with the current design). Also, the driver circuit was not as powerful, so in that version I switched on only the LEDs needed to display the time (one for the hours, one for the minutes), resulting in a 50% duty cycle. The new circuit has the drawback that it can only use a 1/16th duty cycle, but it also makes each LED individually controllable, so it is actually possible to light every LED. This opens up a lot of possibilities for fancy display modes in the firmware.

LED matrix PCB layout

Of course the LED matrix is not physically positioned in a matrix formation, but rather in a circle (with the last four LEDs being left out). Check out the PCB artwork to see how this was accomplished. The top layer is pcb-top.pdf (195kB PDF) and the bottom layer is in pcb-bot.pdf (250kB PDF).

On the top layer, the LED anodes are connected in groups of 8, each going to one of the P-channel FETs driven by the 4515 outputs. These are actually the columns in the schematics for the LED matrix.

The bottom layer is more complex, with the eight matrix rows being routed as 8 parallel tracks around the circle of the clock. You can see how the connections from the 8 current limiting resistors are routed to the common cathode (center) pin of the first 8 LEDs. From there the circuit trace continues, running in parallel around the circle, and shifting one position for each LED. After each group (i.e. once all 8 row signals are connected to an LED in this group), the 8 parallel tracks all shift back and the same pattern starts over for the next LED group.

The entire PCB design was routed manually, mainly to have complete control over the placement of the tracks (especially the LED matrix). Another reason was that I was going to make the PCBs with the normal photo process, so I would not be able to get plated-through holes. By routing manually I was able to avoid vias.

Clock options

The v2.0 HW originally used the main MCU clock for the RTC. The main CPU clock comes from XT1, which is a 14.7456MHz crystal. Unfortunately I did not know much about the load capacitance of the crystal used (nor about the theory of how an incorrect load capacitance affects the generated clock frequency). The result was that the clock would drift a little. Maybe only a couple of minutes per month, but still enough to be noticeable, and a bit annoying.

I then thought that I could get more accurate results using a watch crystal connected to the TOSC inputs (PC6 and PC7). Of course this meant that I had to move two of the row driver outputs to a different port, so writing to the LEDs became a little more complex in the firmware. This should have resulted in more accurate RTC timing, because I had now gotten rid of the load capacitance uncertainty, but unfortunately it turned out that the clock was still drifting. I never really figured out the reason (perhaps the internal AVR load capacitance still did not match the 32kHz watch crystal).

I then decided to try tuning the clock in the firmware. This required a clock reference (I used the clock on the television), and a decent amount of patience. What I did was set the clock to the reference (to seconds accuracy), and then wait for about a month. Then I checked the reference clock again, noting how many seconds the Bob-clock had drifted, and also how many seconds had passed since I set it. Dividing these two numbers gave me the number of seconds that should pass before adding or subtracting a "leap second". This was implemented in the firmware as a counter that would simply count seconds, and when the set number of seconds had passed, either add or subtract one (adding or subtracting was of course based on whether the clock drifted ahead or behind). The nice thing about this approach is that the drift is pretty small to begin with, which means you get very good resolution.

Of course this looked a bit strange after I made the clock display seconds. All of a sudden the clock would skip one second ahead (OK, I guess nobody would notice, because this happened so rarely, but I knew it was happening occasionally, so I came up with a better method). To fix this I implemented sub-seconds for the leap counter. In other words, I internally keep time in 1/64ths of a second, and also use this resolution for the leap counter. The counter thus expires 64 times as often, but each adjustment is only 1/64th of a second. The leap-counter calibration is in my 2.0 firmware, and will be merged into the 2.1/2.2 branch as soon as I get the time.

NDG3001B VCTCXO
The VCTCXO used for the Bob-clock

Then I finally found a better way to fix the problem, which would not require calibrating the clock: I bought a bunch of VCTCXOs off of eBay. A VCTCXO is a Voltage Controlled Temperature Compensated Crystal Oscillator. In other words this is still based on a quartz crystal, but since we know that the oscillation frequency of such a crystal varies with changing temperatures, the VCTCXO measures the temperature and uses this to adjust the frequency output of the crystal (how this works in detail is beyond the scope of this article, but the bottom line is that we get much more accuracy out of the VCTCXO than we would with a crystal). The VCTCXOs I found are 12.8MHz types, so the firmware was adjusted to use this for the main clock, and also use it for the RTC. Using a VCTCXO is definitely the easiest option, since it gives a much more accurate clock, without the need for calibration. But the VCTCXOs are also a bit expensive, and could be harder to get, so if building a Bob-clock, you might want (or be forced) to use a regular crystal, and do the calibration as described above.

Last updated: 2011.11.16