
Slightly unmatched UART frequencies

Started by valentin tihomirov November 25, 2003
UART is used to transfer a byte in serial form bit-by-bit. I know that 10%
deviations between the transmitter and receiver frequencies are permissible.
I was taught that UARTs synchronize at the falling edge (1-to-0) of the start
bit; hence, it should be possible to transfer a stream of bytes of arbitrary
length.

I have developed a simple UART. Its receiver and transmitter run at 9600 bps
with 16x oversampling. Both the receiver and the transmitter have a 1-byte
buffer. To test the design I've created an echo device; it merely mirrors all
the bytes sent to it back to the sender. It works fine with one of the COM
ports on my PC. Another COM port has its crystal running at a slightly faster
fundamental frequency. This causes a problem when it sends a long stream of
bytes to my UART. In fact, sender and recipient cannot synchronize on the
falling edge of the start bit, because one of them is slower and is still
processing the previous byte while the sender proceeds to the next byte and
transmits its start bit. Despite this, my receiver still works fine, because
it is ready to receive the next byte right after the first half of the stop
bit is received. Just to clarify: the receiver samples the serial input at
the middle of each data bit; it reports BYTE_READY in the middle of the stop
bit, and from that moment it is ready to accept the next byte, i.e. ready for
synchronization. Therefore, if data is coming in slightly faster and the
falling edge of the next start bit lands within the stop bit (according to my
UART's clock), the receiver is still able not to miss the data.
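For illustration, here is a minimal behavioural sketch in Python (not the actual CPLD code) of the receiver scheme just described; the function name uart_rx_bytes and the samples input are made up for this example. It samples each bit at its midpoint with 16x oversampling and re-arms for the next start bit right after the middle of the stop bit:

# Behavioural model of a 16x-oversampled UART receiver (illustration only).
# 'samples' is a list of 0/1 line samples taken at 16 times the bit rate.

OVERSAMPLE = 16

def uart_rx_bytes(samples):
    """Decode 8N1 frames; re-arm for the next start bit at mid-stop."""
    out = []
    i = 0
    n = len(samples)
    while i < n - 1:
        # Hunt for the falling edge of a start bit.
        if not (samples[i] == 1 and samples[i + 1] == 0):
            i += 1
            continue
        start = i + 1                         # first sample of the start bit
        mid_start = start + OVERSAMPLE // 2 - 1   # "SampleCnt = 7"
        if mid_start >= n or samples[mid_start] != 0:
            i += 1                            # glitch, not a real start bit
            continue
        # Sample the 8 data bits at their midpoints (LSB first).
        byte = 0
        ok = True
        for bit in range(8):
            pos = mid_start + (bit + 1) * OVERSAMPLE
            if pos >= n:
                ok = False
                break
            byte |= samples[pos] << bit
        if not ok:
            break
        # Check the stop bit at its midpoint ...
        stop_mid = mid_start + 9 * OVERSAMPLE
        if stop_mid < n and samples[stop_mid] == 1:
            out.append(byte)
        # ... and *immediately* start hunting for the next start bit,
        # i.e. half a bit before the nominal end of the frame.
        i = stop_mid
    return out

With a perfectly matched stream this decodes every byte; if the sender is slightly fast, the next falling edge simply shows up during the second half of the stop bit and is still caught, which is exactly the tolerance under discussion.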
On the other hand, the transmitter has to transmit all 10 bits (start + 8 data +
stop) at 9600 bps. Consider, for instance, a UART forwarder or an echo device.
If data is coming in faster than I can forward it, I eventually get a buffer
overrun. That is, the receiver has a byte sitting in its buffer waiting to be
copied into the transmitter for forwarding, but the slow transmitter is still
shifting data out and its buffer is occupied.

I have a "fast" solution for my UART echo device: if the transmitter has
transmitted more than half of the stop bit and senses that a next byte has
been received, it cuts the current stop bit short and starts transmitting the
start bit of the next byte. Ceasing the transmission early is not a good
solution, because the transmitter may be connected to a well-matched or
slightly slower UART. The design may not be a forwarder, so the data provider
may differ from the 9600 bps receiver. In that case, starting the transmission
of the next byte early, while the remote peer is still receiving the stop bit,
causes a stop-bit error.

What is interesting in this situation is the fact that I can build a good echo
device from any industrially manufactured UART (I've used a standalone 16C750
and the ones built into the i8051). They never have a buffer overrun issue,
even though the sending port is slightly faster than the receiving one (like
sending data from my fast COM port to the slow one). Note: no flow control is
used, and the buffers are always 1 byte long. Which trick do they use? Again,
10% frequency deviations between sender and receiver are considered
permissible, and no flow control is required since sender and receiver both
run at a nominal 9600 bps.

I feel this should be a well-known problem (and solution), and I just wonder
why I have not encountered this consideration before.

Thanks.



valentin tihomirov wrote:
> UART is used to transfer a byte in serial form bit-by-bit. I know that 10%
> deviations between the transmitter and receiver frequencies are permissible.
> I was taught that UARTs synchronize at the falling edge (1-to-0) of the start
> bit; hence, it should be possible to transfer a stream of bytes of arbitrary
> length.
>
> I have developed a simple UART. Its receiver and transmitter run at 9600 bps
> with 16x oversampling. Both the receiver and the transmitter have a 1-byte
> buffer. To test the design I've created an echo device; it merely mirrors all
> the bytes sent to it back to the sender. It works fine with one of the COM
> ports on my PC. Another COM port has its crystal running at a slightly faster
> fundamental frequency. This causes a problem when it sends a long stream of
> bytes to my UART.
When you say "your" UART, is this a design you did yourself in an FPGA? If so,
you may not have designed the logic correctly. In order for the receiver to
synchronize to a continuous data stream, it has to sample the stop bit in what
it thinks is the center and then *immediately* start looking for the next
start bit. This will allow a mismatch in speed of almost a half bit minus
whatever slack there is for the sample clock rate. BTW, you are sampling at at
least 8x the bit rate, right?

The max mismatch is not 10%, but a bit less than 5%. In the field I find that
2 to 3% mismatch is reliable, but any more and you can start getting errors.
I guess the difference between theory and practice is perhaps skew caused by
the drivers.

Does this make sense?

--
Rick "rickman" Collins
rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave, Frederick, MD 21701-3110
301-682-7772 Voice, 301-682-7666 FAX
> When you say "your" UART, is this a design you did yourself in an FPGA?
Yes, you're right: my design runs on a CPLD. However, the question is more
about the logic than the implementation. The value of 10% I got from the
www.8052.com forum, where "Software UART" is a hot topic.
> If so you may not have designed the logic correctly. In order for the
> receiver to synchronize to a continuous data stream, it has to sample
> the stop bit in what it thinks is the center and then *immediately*
> start looking for the next start bit. This will allow a mismatch in
> speed of almost a half bit minus whatever slack there is for the sample
> clock rate. BTW, you are sampling at at least 8x the bit rate, right?
I use 16x oversampling and check input values at the middle of a bit
(SampleCnt = 7). You suggest exactly what I have done. I think the receiver
part will work under any condition. But I need to know what I should do with
the transmitter module. As I attempted to explain, this half-bit solution
cannot be used to synchronize transmitters. It is a bad idea to start
transmitting the next byte at the middle of the stop bit: it may break a
listening device with a slow clock, which reaches the center of the stop bit
when the start bit of the next byte is already being transmitted. On the other
hand, if data is coming in slightly faster, the transmitter should do
something, otherwise I face a buffer overrun condition.

I understand that I can ignore the problem in the transmitter module; it is
the receiver that should synchronize with the transmitter. However, I kept
getting buffer overruns until I used the trick described in my message (early
transmit). It is definitely not a problem with the receiver, because I had
solved that right before running into the transmitter's buffer overrun. I want
to know how the correct logic should work; there must be a solution, since
commercial UARTs work without any problems. My UART is the first one where
I've realized that it is at all possible to get into trouble with a slowly
transmitting UART. Is the problem clearer now?
Valentin,

You bring up a good subject, and you're absolutely correct that if you
continuously send data from one serial port at 9600.01bps to a receiver at
9600, sooner or later there must be a buffer overflow.  There's no way
around this -- but keep in mind that RS-232 (or most any protocol, for that
matter) isn't designed to send a truly continuous stream for days, months,
or years at a time without a break!  With a typical RS-232 device, there are
MANY breaks, and keep in mind that something like a PC often has a pretty
generous software buffer (many kilobytes) backing up the hardware so that it
would take a 'long' time to create an overflow.  I can't explain your
observation that an, e.g., 8031-based data forwarder -- supposedly -- works
other than to say I suspect that perhaps you didn't really do the type of
torture test that could produce an overrun.  (I.e., did you look with a
scope or logic analyzer to make sure there was NEVER an idle bit time that
might have allowed the receiver to 'catch up'?)

One solution that you can use for protocols such as 8B/10B -- where you get
a bazillion data bytes interspersed with an occasional 'comma' character -- 
is to use a form of compression where you assign 9 bits to every received
byte and 'swallow' any comma you get by setting the high bit.  You can then
sit down and work out how often you need to insert a comma into your
bitstream to avoid buffer overflow.  We had a gigabit fiber interface that
used this approach, and with a 16 byte FIFO for buffering and a +/-100PPM
clock at 1.0625Gbps, the numbers worked out to many thousands of bytes
before overflow would be a concern.
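As a rough back-of-the-envelope check of those numbers (a sketch based on the figures above, not the actual design): with two clocks each at +/-100 PPM, the worst-case rate difference is 200 PPM, so each byte of FIFO slack buys roughly 1 / 200e-6 = 5000 bytes between commas.

# Rough estimate of how often a 'comma' (idle/skip character) must be
# inserted so a small elastic FIFO never overflows. Illustrative only;
# the figures are taken from the post above, the margin choice is mine.

def bytes_between_commas(fifo_slack_bytes, ppm_tx, ppm_rx):
    """Worst case: source clock fast by ppm_tx while the retransmit clock is slow by ppm_rx."""
    worst_case_mismatch = (ppm_tx + ppm_rx) * 1e-6   # fractional rate error
    # The FIFO gains 'mismatch' of a byte for every byte forwarded, so:
    return fifo_slack_bytes / worst_case_mismatch

# 16-byte FIFO, keep half of it as safety margin, +/-100 PPM clocks on each side.
print(bytes_between_commas(fifo_slack_bytes=8, ppm_tx=100, ppm_rx=100))
# -> 40000.0  (i.e. "many thousands of bytes", as stated above)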

To build a data repeater that never suffers from potential buffer overflow
under any circumstances whatsoever, I don't think you have many options
other than locking the re-transmit bit rate to that of the received data
using, e.g., a PLL.  You could try this in software as well, I suppose, if
you have a bit rate generator that's 'finely tunable' -- most in
microcontrollers aren't!

A few other comments:

> UART is used to transfer a byte in serial form bit-by-bit. I know that 10%
> deviations between the transmitter and receiver frequencies are permissible.
'Permissible,' yeah, I suppose, although 10% wouldn't be anything to write home about!
> I have developed a simple UART. Its receiver and transmitter run at 9600
> bps with 16x oversampling.
Nothing wrong with 16x oversampling (it will definitely help -- a little), but
keep in mind that you _can_ get away with no oversampling at all and get quite
reasonable results if you position the sample point at the middle of each bit
interval.

---Joel Kolstad
Valentin,

apparently you are trying to resolve the clock difference by cutting the stop
bit in order to achieve a higher transmission rate.

That is a very nice idea, and it is completely wrong.

In commercial (and all other) UARTs, it is the *receiver* that compensates for
the clock difference. The rule is that the transmitter sends 10 bits (Start +
8 Data + Stop), but the receiver only requires 9.5 bits (1 Start + 8 Data +
0.5 Stop). It is this 0.5-bit difference which compensates for the clock
difference (and which also gives you the 5% that Rick mentioned).

So far, your design seems correct. But then you try to speed up the
transmitter as well by sending less than 10 bits (actually 9.5 bits).

The net effect is that you have changed the usual "transmit 10 & receive 9.5"
scheme to a "transmit 9.5 & receive 9.5" scheme, which is as bad as a
"transmit 10 & receive 10" scheme when the clocks are different. By doing this
you will necessarily lose single bits long before your buffer overruns. You
stole the 0.5 bit from the receiver that the receiver desperately needs to
compensate for the clock differences.

In other words, your attempt to avoid buffer overflows (which cannot occur
since the receiver takes care of the clock frequencies) has actually created
the problem you are describing. The solution is simple: don't touch the
transmitter.

BTW, you should check whether your 10% refers to clock jitter (movement of the
clock edges around a fixed reference point) rather than to a difference in
clock frequency.

/// Juergen
> You bring up a good subject, and you're absolutely correct that if you
> continuously send data from one serial port at 9600.01 bps to a receiver at
> 9600, sooner or later there must be a buffer overflow. ...
I think you are missing a key idea. The receiver has to make sure that it will
tolerate early start bits. That is, the receiver has to start looking for a
new start-of-start-bit right after it has checked the middle of the stop bit,
rather than wait until the end of the stop bit to start looking.

[Interesting thread. Thanks. I didn't know about that trick.]

--
The suespammers.org mail server is located in California. So are all my other
mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
"Hal Murray" <hmurray@suespammers.org> wrote in message
news:vs6612g8rso066@corp.supernews.com...
> > You bring up a good subject, and you're absolutely correct that if you
> > continuously send data from one serial port at 9600.01 bps to a receiver at
> > 9600, sooner or later there must be a buffer overflow. ...
>
> I think you are missing a key idea. The receiver has to make
> sure that it will tolerate early start bits. That is, the receiver
> has to start looking for a new start-of-start-bit right after
> it has checked the middle of the stop bit rather than wait until
> the end of the stop bit to start looking.
Correct. If you really must tolerate continuous streams, with no data loss,
and only a single stop bit, then you must actually be able to 'pass on' data
at whatever average rate it comes in at. Some UARTs have the idea of
fractional TX stop bits to allow this; most just choose 2 stop bits to give
margin.

Receive should ALWAYS start looking for START at the middle of STOP (but chips
do not always get this right :) The OP's idea of a full half bit is a coarse
example that can force an error, as it jumps right to the limit.

Since this was using 16x BAUD clocking, you can quantize to 1/16 of a bit
time, or approximately 0.625% steps. Four of these is 2.5%, or 1/4 bit, which
tolerates up to 9840 baud. If you don't want to use up all the error budget at
one end, that's about the limit (1/4 bit at each end, or a tad less if you use
3-slot vote sampling). This is why most uCs with trimmed on-chip oscillators
specify 2.5% or 2% precision.

Since this is a PLD device, you could watch the TX buffer phase and nominally
send a full STOP bit, but if the phase indicates margin problems (incoming
faster than outgoing) you can decrement the STOP bit in 1/16 fractions -- or
you could force a 15/16-wide STOP bit and use a crystal, and keep the logic
simpler (tolerates up to 9660 baud at 100% traffic).

A purist's design would also extend STOP bits fractionally (17/16 at input
< 9600 Bd true), so a sudden whole-bit jump did not occur. That could cause
problems if a number of these are in a chain; and a system with phase jitter
that averages 9600, but is sometimes faster, sometimes slower, might also
exist.

Makes the idea of a continuous BAUD test pattern generator interesting -- one
that can generate controlled errors on both sides of true, and with dithering.
Good student project :)

-jg
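A quick sanity check of those tolerance figures (a sketch, not part of jg's post): shortening the transmitted stop bit by n sixteenths shortens the outgoing 10-bit frame to (160 - n)/16 bit times, and the fastest incoming rate that can be sustained follows directly.

# Fastest continuous incoming baud rate a 9600-bps retransmitter can keep up
# with, if its own stop bit is shortened by n/16 of a bit (sketch only).

NOMINAL_BAUD = 9600
OVERSAMPLE = 16
FRAME_BITS = 10          # 1 start + 8 data + 1 stop

def max_input_baud(stop_shortening_sixteenths):
    tx_frame_bits = FRAME_BITS - stop_shortening_sixteenths / OVERSAMPLE
    # Overrun-free when the incoming 10-bit frame lasts at least as long
    # as our (shortened) outgoing frame:
    return NOMINAL_BAUD * FRAME_BITS / tx_frame_bits

print(round(max_input_baud(1)))   # 15/16 stop bit  -> 9660 baud, as above
print(round(max_input_baud(4)))   # 3/4 stop bit    -> 9846 baud (~the 9840 figure above)
print(round(max_input_baud(8)))   # half a stop bit -> 10105 baud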
MAXIMUM error is .5 bit over one frame. In your case frame = 10 bits.
.5/10 = 5%
"GPG" <peg@slingshot.co.nz> wrote in message
news:62069f15.0311250309.f28037c@posting.google.com...
> MAXIMUM error is .5 bit over one frame. In your case frame = 10 bits.
> .5/10 = 5%
And if the sender is 2% too slow and the receiver is 2% too fast, you have a
4% error, which is just below the 5% that can be tolerated.

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they may, or may not, be
shared by my employer, Atmel Sweden.
Valentin wrote:
> ...
>
> I have a "fast" solution for my UART echo device: if the transmitter has
> transmitted more than half of the stop bit and senses that a next byte has
> been received, it cuts the current stop bit short and starts transmitting
> the start bit of the next byte. Ceasing the transmission early is not a
> good solution, because the transmitter may be connected to a well-matched
> or slightly slower UART. The design may not be a forwarder, so the data
> provider may differ from the 9600 bps receiver. In that case, starting the
> transmission of the next byte early, while the remote peer is still
> receiving the stop bit, causes a stop-bit error.
>
> ...
juergen sauermann wrote:
> Valentin,
>
> apparently you are trying to resolve the clock difference by cutting
> the stop bit in order to achieve a higher transmission rate.
> That is a very nice idea and it is completely wrong.
And Philip writes:

Modifying the locally transmitted character to a non-standard length, by
changing the length of the stop bit on the fly as a buffer over-run is about
to occur, is not a good idea: you don't know the details of how the receiver
that is listening to it was designed, and it may not be very happy to see the
next start bit before it is finished with what it expects is a full-length
stop bit, but is not.

The underlying problem is that you are potentially sending very long streams
of data through a protocol that was designed for asynchronous transmission.
That is why there are start bits and stop bits. In real systems, there is flow
control, typically implemented one of 3 ways:

1) Hardware flow control: CTS/RTS
2) Character based flow control: XON/XOFF (ctrl-q/ctrl-s)
3) Upper layer flow control: packet based transfers with acknowledge packets
   used to pace transmissions.

The "real-time-ness" (new word I just invented) of the flow control depends on
the size of the receive buffer. With only 1 byte, you need (1), and even this
may not be good enough; you may need at least 2 bytes of buffer. As the buffer
gets bigger (say 8 to 100 bytes) then (2) is workable, and can even tolerate
some operating system delay. When the buffers get to be multiple packets in
size, then (3) may be appropriate. (A small illustration of option (2) is
sketched below.)
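To make option (2) concrete, here is a toy model of XON/XOFF pacing; the class name, buffer size, and watermark thresholds are made-up numbers for illustration, not anything specified in the thread.

# Toy model of XON/XOFF (software) flow control. The receiver drains its
# buffer more slowly than bytes arrive, and paces the sender by returning
# XOFF when the buffer is nearly full and XON when it has drained again.

XON, XOFF = 0x11, 0x13          # ctrl-q, ctrl-s

class PacedReceiver:
    def __init__(self, buffer_size=16, high_water=12, low_water=4):
        self.buf = []
        self.size = buffer_size
        self.high, self.low = high_water, low_water
        self.paused = False

    def on_byte(self, b):
        """Called for every byte arriving on the line; returns XOFF, XON, or None."""
        if len(self.buf) >= self.size:
            raise OverflowError("buffer over-run")   # what flow control prevents
        self.buf.append(b)
        if not self.paused and len(self.buf) >= self.high:
            self.paused = True
            return XOFF                              # tell the sender to stop
        return None

    def drain(self, n=1):
        """Forward up to n bytes out the (slower) transmitter."""
        del self.buf[:n]
        if self.paused and len(self.buf) <= self.low:
            self.paused = False
            return XON                               # tell the sender to resume
        return None

The sender has to notice the XOFF within (buffer size minus high watermark) character times, which is why this scheme needs more than a 1-byte buffer, as noted above.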
juergen sauermann also wrote:

> In commercial (and all other) UARTs, it is the receiver that
> compensates for the clock difference. The rule is that the
> transmitter sends 10 bits (Start + 8 Data + Stop), but the
> receiver only requires 9.5 bits (1 Start + 8 Data + 0.5 Stop).
Well, up to a point this is correct. The receiver can certainly declare that
the character has arrived after the sample is taken in the middle of the stop
bit (at 9.5 bit times into the received character). BUT this is not a solution
to the original poster's problem! The problem still exists because the
remaining 0.5 bit is still going to arrive; the data is being sent with a
slightly faster clock than the one with which the transmitter is able to
retransmit the character. If there is no line-idle time between the end of the
inbound stop bit and the next inbound start bit, the system will eventually
have an over-run problem, no matter how big the input buffer. The closer the
two clock rates, and the bigger the buffer, the longer it takes to happen, but
it will happen.

Do the math. Let the far-end transmitter run at a 1% faster clock rate than
the local transmitter that is going to retransmit the character. Here are some
easy-to-work-with numbers:

Perfect 9600 baud is 104.1666666 microseconds per bit.
One character time (1 Start, 8 Data, 1 Stop) is 1.041666666 ms.

After 1 character has arrived, we start to retransmit it. It doesn't matter
whether we start at the 9.5 or the 10 bit time; it will take us 1.010101 times
longer to send it than it took to receive it. If we have a multibyte buffer,
after 100 characters arrive at a far-end transmit rate that is 1% too fast, we
have the following:

0.99 * 100 * 1.041666666 ms = 103.1249999 ms

If our local transmitter is right on spec for baud rate, it will take
104.1666666 ms to send these characters; this is regardless of whether it
starts at the 9.5 or the 10.0 point, because the character is going through a
buffer and changing clock domains. The difference in time means that, in the
time the local transmitter takes to send 100 characters, the far-end
transmitter will send 101 characters. If our buffer is 10 characters long,
then after 1000 characters arrive we will only have managed to offload 990
characters, and our 10-character buffer is full. Some time during the next 100
characters, we will have buffer over-run.
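The same arithmetic can be condensed into a small sketch (an illustration, not Philip's wording): a buffer of B characters, fed by a clock that is a fraction m too fast, overruns after roughly B/m characters.

# Characters received before a store-and-forward buffer overruns, when the
# incoming clock is 'mismatch' faster than the outgoing one and the line
# never goes idle (illustration of the calculation above).

def chars_until_overrun(buffer_chars, mismatch):
    # Each forwarded character leaves 'mismatch' of a character behind in
    # the buffer, so the buffer gains one whole character every 1/mismatch
    # characters received.
    return buffer_chars / mismatch

print(chars_until_overrun(buffer_chars=1,  mismatch=0.01))   # ~100 chars (the OP's echo device)
print(chars_until_overrun(buffer_chars=10, mismatch=0.01))   # ~1000 chars, as worked out above
print(chars_until_overrun(buffer_chars=10, mismatch=0.001))  # at 0.1%: ~10000 chars, but still finite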
juergen sauermann also wrote:

> It is this 0.5 bit difference which compensates the clock
> difference (and which also gives you the 5% that rick mentioned).
As you can see above, I disagree. This is not a solution.
> So far, your design seems correct. But then you try to speed
> up the transmitter as well by sending less than 10 bits
> (actually 9.5 bits). The net effect is that you have changed
> the usual "transmit 10 & receive 9.5" scheme to a
> "transmit 9.5 & receive 9.5" scheme, which is as bad as a
> "transmit 10 & receive 10" scheme when the clocks are different.
Actually, he hasn't changed the receiver to receive 9.5, because the far-end
transmitter is still sending 10 bits. Ignoring the last 0.5 bit does not solve
the problem, as the error is cumulative.
> By doing this you will necessarily lose single bits long before
> your buffer overruns. You stole the 0.5 bit from the receiver
> that the receiver desperately needs to compensate for the clock
> differences.
Nope. This does not work.
> In other words, your attempt to avoid buffer overflows (which
> cannot occur since the receiver takes care of the clock
> frequencies) has actually created the problem you are describing.
> The solution is simple: don't touch the transmitter.
Nope. This does not work.

The following solutions can be made to work:

A) Use one of the 3 flow control systems described above, with a
   suitable-length buffer, or some other flow control system with a similar
   effect.

B) Deliberately force some idle time between characters at the far-end
   transmitter. If your system is designed for a worst case of 5% difference
   in clock frequencies, forcing an idle of 0.6 bit time between the stop bit
   and the next start bit will achieve this (with some minor safety margin;
   see the sketch after this list). You will still need some buffer between
   your receiver and transmitter, though. Another version of this is to just
   add some idle time every N characters, such as "every 100 characters, let
   the line go to sleep for 2 character times".

C) Use a PLL to derive a local clock that is phase locked to the received
   data, and use this for transmit.

D) At the far-end transmitter, add some pad characters to the data stream at
   regular intervals; they can be thrown away at the receiver.

E) Run a clock line from the far-end transmitter to your system and use that
   for your transmit clock (hardly an async system any more).

F) Be sneaky. Most UARTs can be set for 1, 1.5, or 2 stop bits. Set the
   far-end transmitter for 8N2 (1 start, 8 data, 2 stop). Set your receiver
   and transmitter for 8N1 (1 start, 8 data, 1 stop). This works because stop
   bits look just like line idle. This effectively implements (B), but is
   localized to the initialization code for the far-end transmitter.

Philip Freidin
Fliptronics
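As a footnote to options (B) and (F), here is a tiny sketch of the padding arithmetic using the thread's own numbers (the function name is made up): the extra stop/idle time per frame only has to cover the worst-case clock mismatch accumulated over one 10-bit frame.

# Minimum extra idle (in bit times) to append to each transmitted frame so a
# retransmitter whose clock is 'mismatch' slower never falls behind (sketch only).

FRAME_BITS = 10   # 1 start + 8 data + 1 stop

def min_extra_idle_bits(mismatch):
    # The incoming frame must last at least as long as the outgoing one:
    #   (FRAME_BITS + extra) / f_fast >= FRAME_BITS / f_slow
    # With f_fast = (1 + mismatch) * f_slow this reduces to:
    return FRAME_BITS * mismatch

print(min_extra_idle_bits(0.05))   # 5% worst case -> 0.5 bit; the 0.6 in (B) adds margin
print(min_extra_idle_bits(0.02))   # 2% -> 0.2 bit; a whole second stop bit as in (F) is plenty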