I noticed that when I started to optimize the 0.86 code, it would just freeze up. I haven't yet figured out why, though, since by then I had already progressed to more-or-less rewriting the 0.86 code. That rewrite is nice and rock-solid stable.
Quote:
Originally Posted by nickdigger
Here's how I optimized my microseconds():
Hey, that's pretty cool. I might try that, too, at some point.
Are you calling your microSeconds() function in your ISRs? If you are, you might want to modify your interrupt disabling mechanism a bit. Something more like:
Code:
ulong microSeconds (void)
{
	union32 tmp;
	const byte *t2ocptr = (byte *)&timer2_overflow_count;

	byte oldSREG = SREG; // save interrupt flag status (in case an ISR called here)
	cli(); // now, we disable interrupts

	//tmp.ul = timer2_overflow_count<<8;
	//tmp_tcnt2 = TCNT2;
	tmp.b3 = *(t2ocptr + 2); // copy the low three bytes of the overflow count
	tmp.b2 = *(t2ocptr + 1); //  into the top three bytes of tmp (same as <<8)
	tmp.b1 = *(t2ocptr);
	tmp.b0 = TCNT2;          // and the hardware counter into the low byte

	SREG = oldSREG; // restore previous interrupt flag state
	// sei();

	//return (tmp + tmp_tcnt2) * 4;
	return tmp.ul * 4;
}
This way, you're not inadvertently re-enabling interrupts while servicing an interrupt. Nested interrupts are a total PITA at best.
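As an aside, the union32 above isn't spelled out in this post. On a little-endian AVR it would look something along these lines (I'm assuming the declaration here; check the actual source for how it's really defined):
Code:
typedef unsigned long ulong;
typedef unsigned char byte;

typedef union
{
	ulong ul;          // the whole 32-bit value
	struct             // anonymous struct - a GCC/avr-gcc extension
	{
		byte b0;       // least significant byte (gets TCNT2)
		byte b1;       // low byte of the overflow count
		byte b2;
		byte b3;       // most significant byte
	};
} union32;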
Remember also that if you happen to read TCNT2, it might roll over right before you read it, giving you a falsely low value. This is why it's generally better to read the value, check for a pending overflow condition, then re-read and adjust for the overflow if one is found. All of this, of course, is done while interrupts are disabled. I strongly suspect this is why the original microSeconds() function had a recursive call: to try to get around this quirky aspect of timers.
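For what it's worth, here's a minimal sketch of that read-check-adjust pattern, assuming an ATmega168/328-style part where the Timer2 overflow flag is TOV2 in TIFR2 (the function name is just for illustration):
Code:
#include <avr/io.h>        // TCNT2, TIFR2, TOV2, _BV(), SREG
#include <avr/interrupt.h> // cli()

extern volatile unsigned long timer2_overflow_count; // bumped by the Timer2 overflow ISR

unsigned long microSecondsChecked (void)
{
	unsigned long ovf;
	unsigned char tcnt;
	unsigned char oldSREG = SREG;   // save interrupt flag status

	cli();                          // disable interrupts while we sample
	ovf = timer2_overflow_count;    // software overflow counter
	tcnt = TCNT2;                   // hardware counter

	// if Timer2 rolled over after cli(), its ISR couldn't run, so the software
	// counter is one overflow behind; a pending TOV2 flag says that happened,
	// and tcnt < 255 confirms the counter has already wrapped back around
	if ((TIFR2 & _BV(TOV2)) && (tcnt < 255)) ovf++;

	SREG = oldSREG;                 // restore previous interrupt flag state

	return ((ovf << 8) | tcnt) * 4; // 4 us per Timer2 tick, same scaling as above
}
It's essentially the same trick the stock Arduino micros() plays with Timer0 and TOV0, just pointed at Timer2 here.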