rickman
On 9/13/2014 7:34 PM, Phil Hobbs wrote:
On 9/13/2014 7:25 PM, John Larkin wrote:
On Sat, 13 Sep 2014 18:18:58 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:
On 9/13/2014 5:28 PM, John Larkin wrote:
On Sat, 13 Sep 2014 01:55:27 -0400, rickman <gnuarm@gmail.com> wrote:
On 9/13/2014 12:44 AM, Bill Sloman wrote:
The PLL that ties the 155.52 MHz VCXO output to the 10 MHz reference
output stops long-term phase shift between the two, but it controls
only the relative phase shift, not the absolute phase shift.
I don't follow this at all. What is the difference between the two in
this case? There will be a relative measurement and that will be
brought to zero by the loop. So what is the "absolute" phase shift?
Or are you talking about the short term phase shift of the VCXO?
Short term phase jitter inside the PLL feedback path doesn't
matter, so long as it long-term averages to less than a picosecond
- or whatever - before it can build up enough to shift the phase
of the 155.52MHz VCXO.
That is not clear to me. "Build up" is not something I can picture in
this circuit. Any deviation will result in noise and phase shift of
the VCXO output. The question is just how large that deviation is. I
guess you are saying that the filter can average out the deviations
well enough. John seems to think that can cause other problems or
maybe he just doesn't have confidence in this idea because he hasn't
played with it yet. He admits he is not a math guy and prefers to
test and simulate.
I did the ECL bang-bang thing to recover the clock from the 155.52 MHz
data stream, but the flop was clocked at 77.76 MHz. But my new problem
is to generate that data stream, and I'm given a 10 MHz source to lock
to.
As far as math goes, analyzing the bang-bang PLL is messy. I've only
seen a couple of papers on the subject, not very helpful, and most PLL
texts don't even mention the technique.
I did analyze the one at 77 MHz, making some simplifying assumptions,
but the 80 kHz comparison frequency is scary. I was hoping a
discussion would suggest a cute way to improve it.
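
Since the bang-bang loop resists closed-form analysis, a quick
behavioral sim is one way to get a feel for the 80 kHz comparison
rate. A minimal sketch in Python; the pull range, center-frequency
offset, and edge-jitter numbers are placeholders, not your actual
design values:

    import random

    T_CMP  = 1.0 / 80e3   # 12.5 us comparison period
    F_NOM  = 155.52e6     # VCXO nominal frequency, Hz
    F_OFF  = 5.0          # VCXO center-frequency error vs. ref, Hz (assumed)
    DELTA  = 20.0         # bang-bang frequency pull, Hz (assumed, > F_OFF)
    JITTER = 2e-12        # RMS edge jitter seen at the D-flop, s (assumed)

    phase_err = 0.0       # time error between compared edges, seconds
    worst = 0.0

    for i in range(400000):          # about 5 seconds of loop time
        # the D-flop only reports early/late, corrupted by edge jitter
        late = (phase_err + random.gauss(0.0, JITTER)) > 0
        # the decision pulls the VCXO low or high around its offset
        df = F_OFF + (-DELTA if late else DELTA)
        # fractional frequency error integrates into time error per period
        phase_err += T_CMP * (df / F_NOM)
        if i > 1000:                 # skip the acquisition transient
            worst = max(worst, abs(phase_err))

    print("peak steady-state time error: %.2f ps" % (worst * 1e12))

The deterministic part of the limit cycle scales as
T_CMP*(F_OFF+DELTA)/F_NOM, about 2 ps with these placeholder numbers,
which is why the slow comparison rate is the scary part.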
The fact that a phase detector operating at 80 kHz could have 1 psec
relative stability misses the point that dividing the 155.52 MHz down
to 80 kHz introduces more than 1 psec of phase shift, as does
dividing 10 MHz down to 80 kHz.
What I read was that John would divide one of the frequencies to
determine which edge of the clock to compare, but the comparison would
be done between the two clocks, not the divided clock. The divided
clock would be used as an enable on the phase comparison in effect.
Actually, I only need to divide one of the clocks to 80 kHz; then the
Dflop can compare that to the un-divided other one. I'll divide in an
FPGA but resync with an ECLinPS flip-flop, so the division will add
only sub-ps jitter to the overall loop.
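
For what it's worth, both clocks divide to 80 kHz with integer
ratios, so either one can be the divided input. Plain arithmetic,
nothing assumed beyond the frequencies already on the table:

    f_vcxo = 155.52e6
    f_ref  = 10e6
    f_cmp  = 80e3

    print(f_vcxo / f_cmp)   # 1944.0 -> divide the 155.52 MHz VCXO by 1944
    print(f_ref  / f_cmp)   # 125.0  -> or divide the 10 MHz reference by 125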
Anecdotally, double resynchronizers in different packages are better
than single ones, but I don't have measurements to back that up.
Cheers
Phil Hobbs
The bang-bang PLL bears a remarkable similarity to the circuit that's
used in IC testing to force metastability.
Resyncing the FPGA divider has no hazards, because we can provide safe
setup and hold times.
Understood. Anecdotally, as I say, two stages of resynchronization
exhibit usefully less jitter than one. YMMV.
I believe Phil's point is that metastability is a problem no matter
how it is perceived. I don't think you can analyze this circuit to
come up with an MTBF number for the synchronizer FF. John said in an
earlier post that hundreds of ps of metastability won't be a problem,
but the nature of metastability is that you can't predict the
duration. The best you can hope for is to characterize the average
frequency of failure, and in this case the feedback will be pushing
the circuit toward the point of failure.
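
For reference, the usual synchronizer estimate is
MTBF = exp(t_settle/tau) / (T0 * f_clk * f_data), and it assumes the
data edges are uniformly distributed over the clock period, which is
exactly the assumption this loop breaks by steering the edges
together. A rough sketch with placeholder constants; tau and T0 have
to come from the flop's characterization data, and the clock/data
assignment below is just one way the comparator could be wired:

    import math

    tau    = 150e-12   # metastability resolution time constant, s (placeholder)
    T0     = 1e-9      # metastability aperture parameter, s (placeholder)
    f_clk  = 80e3      # flop clocked by the divided-down 80 kHz (assumed)
    f_data = 10e6      # un-divided reference on the D input (assumed)

    def mtbf(t_settle):
        """Mean time between unresolved events for a given settling time."""
        return math.exp(t_settle / tau) / (T0 * f_clk * f_data)

    for t_s in (1e-9, 3e-9, 6e-9):
        print("t_settle = %.0f ns -> MTBF = %.3g s" % (t_s * 1e9, mtbf(t_s)))

Once the loop drags the edges into coincidence, the real failure rate
is worse than this formula suggests.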
In fact, that is the difference between this circuit and one built to
test the metastability characteristics of a device: that circuit
would have a reproducible, well-defined distribution of coincident
edges, while this one is hard to characterize and is actually trying
to maximize metastability.
Adding a second FF to reclock the output of the first *will* greatly
improve your MTBF and cost next to nothing relative to the rest of
the design. Then again, the impact of a failure may be insignificant.
What happens to the filtered output if the output of the FF is in a
random state, or even oscillates for some time, between the 80 kHz
samples?
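
Back of the envelope for that second flop: reclocked at the same
80 kHz, the first stage gets a whole comparison period to resolve
before stage two samples it. Same placeholder tau as the estimate
above:

    tau     = 150e-12     # resolution time constant, s (placeholder)
    t_extra = 1.0 / 80e3  # extra settling time before the second flop samples, s

    # The MTBF multiplier is exp(t_extra / tau); print the exponent because
    # the number itself overflows a float, which is really the whole point.
    print("second-stage MTBF multiplier: exp(%.0f)" % (t_extra / tau))

What the second flop can't fix is a clean but wrong decision from the
first one; that just becomes one bad bang-bang sample for the loop
filter to average, which is the question about the filtered output
above.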
--
Rick