EDK : FSL macros defined by Xilinx are wrong

Den onsdag den 12. april 2017 kl. 05.37.07 UTC+2 skrev John Larkin:
On Tue, 11 Apr 2017 09:29:03 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote:
We have a ZYNQ whose predicted timing isn't meeting decent margins.
And we don't want a lot of output pin timing variation in real life.

We can measure the chip temperature with the XADC thing. So, why not
make an on-chip heater? Use a PLL to clock a bunch of flops, and vary
the PLL output frequency to keep the chip temp roughly constant.
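The thermostat loop proposed above can be sketched in a few lines. The XADC raw-to-Celsius conversion below follows the standard 7-series XADC transfer function; the proportional heater control, its setpoint, and its gain are invented for illustration:

```python
# Sketch of the proposed on-chip thermostat: read the die temperature
# (as the XADC reports it) and adjust how hard a bank of dummy "heater"
# flops is toggled to hold the die near a setpoint. The control scheme
# and gain here are illustrative assumptions, not a tested design.

def xadc_raw_to_celsius(raw):
    """Convert a 12-bit XADC temperature code to degrees C
    (7-series XADC transfer function)."""
    return raw * 503.975 / 4096.0 - 273.15

def heater_duty(temp_c, setpoint_c=60.0, gain=0.1):
    """Proportional control: fraction of time the heater flops toggle.
    More heat when below the setpoint; clamped to [0, 1]."""
    duty = gain * (setpoint_c - temp_c)
    return min(1.0, max(0.0, duty))
```

In practice the duty value would gate the enable of the toggling flops (or scale the PLL frequency driving them), and the loop would run at a slow rate compared with thermal time constants.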

I'm confused by the concept. Doesn't timing get *worse* as temp increases?

Prop delays get slower.

How would a higher temperature help?

High temperature is an unfortunate fact of life sometimes. I'm after
constant temperature, to minimize delay variations as ambient temp and
logic power dissipations change.

By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best best is to stay as cool as possible and keep all outputs registered.

All our critical outputs are registered in the i/o cells. Xilinx tools
report almost a 3:1 delay range from clock to outputs, over the full
range of process, power supply, and temperature. Apparently the tools
assume the max specified Vcc and temperature spreads for the part and
don't let us tease out anything, or restrict the analysis to any
narrower ranges.


If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.

That is basically what the IDELAY/ODELAY blocks are for: you instantiate an IDELAYCTRL and feed it a ~200 MHz clock, and it uses that as a reference to
reduce the effects of process, voltage, and temperature on the IODELAY.
 
If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.

I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read.
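The closed-loop scheme described above can be sketched behaviorally. Here `sample_loopback()` is a hypothetical stand-in for reading the looped-back pin through the input IODELAY at a given tap; tap counts and the one-step-per-iteration policy are illustrative:

```python
# Behavioral sketch of the IODELAY feedback idea: scan the input-delay
# taps on the looped-back pin to locate the incoming edge, then nudge
# the output-delay tap to hold that edge at a target position.

def find_edge(sample_loopback, ntaps=32):
    """Return the first tap index where the sampled loopback value
    changes, or None if no transition is seen in the scan."""
    prev = sample_loopback(0)
    for tap in range(1, ntaps):
        cur = sample_loopback(tap)
        if cur != prev:
            return tap
        prev = cur
    return None

def adjust_output_tap(current_tap, edge_tap, target_tap, max_tap=31):
    """One feedback step: move the ODELAY tap one notch toward keeping
    the measured edge at the target position."""
    if edge_tap is None:
        return current_tap               # no measurement: hold
    if edge_tap > target_tap:
        return max(0, current_tap - 1)   # edge arrived late: less delay
    if edge_tap < target_tap:
        return min(max_tap, current_tap + 1)
    return current_tap
```

The open-loop variant mentioned above would replace `find_edge` with a lab-characterized temperature-to-tap lookup table.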

Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality.
 
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.


I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read.

Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality.

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 4/12/2017 4:20 PM, John Larkin wrote:
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.


I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read.

Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality.

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.

That's also not adjustable in real time though.

I believe what the others are talking about is a real time adjustable
delay that is built into the clocking module. I don't know about the
Zynq, but Xilinx has what they call a delay locked loop which sounds
exactly like what you need. I believe it works by syncing the output
signal to the clock signal. There will be some signal path in the
feedback loop which will still cause timing variation with temperature
and I suppose voltage, but the variation in process can be compensated.

--

Rick C
 
On Wednesday, 4/12/2017 4:27 PM, rickman wrote:
On 4/12/2017 4:20 PM, John Larkin wrote:
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

If you really need to control output delay you can use the IODELAY
block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.


I still think the IODELAY could help you. The output goes through an
adjustable IODELAY, then you route the output back in through a pin,
adjust the input IODELAY to figure out where the incoming edge is,
and then use a feedback loop to keep the output delay constant. It's
a technique used for deskewing DRAM data. I think the main clock
would also have to be deskewed with a BUFG so you have a good
reference for the input. Or, if you characterized the delay-vs-temp
in the lab, you could run in open-loop mode by adjusting the IODELAY
tap based on the temperature you read.

Yes, the tools are definitely pessimistic. They're only useful for
worst-case. I'm pretty sure you can put in the max temperature when
doing PAR, so you could isolate the effects of just that, but it will
still probably be worse variation than in reality.

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.

That's also not adjustable in real time though.

I believe what the others are talking about is a real time adjustable
delay that is built into the clocking module. I don't know about the
Zynq, but Xilinx has what they call a delay locked loop which sounds
exactly like what you need. I believe it works by syncing the output
signal to the clock signal. There will be some signal path in the
feedback loop which will still cause timing variation with temperature
and I suppose voltage, but the variation in process can be compensated.

In the 7-series what you want is the MMCM, which has the ability to
adjust the output phase in steps of 1/56 of the VCO period. This
adjustment can be applied to a subset of the MMCM outputs, so you
can for example vary the outgoing clock phase while keeping the
data phase constant with respect to the clock driving the MMCM.
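To put numbers on the 1/56-of-VCO-period step: with the thread's 7 ns clock and an assumed x7 multiplier (VCO = 1 GHz, which is within the 7-series MMCM range), the fine-phase step works out to roughly 18 ps:

```python
# Worked numbers for the MMCM fine-phase step (1/56 of the VCO period),
# using the thread's 7 ns clock. The x7 VCO multiplier is an assumption
# chosen only to keep the VCO inside its legal range.
clk_period_ns = 7.0
vco_mult = 7
vco_period_ns = clk_period_ns / vco_mult        # 1.0 ns
phase_step_ps = vco_period_ns * 1000 / 56       # ~17.9 ps per step
steps_per_clk = round(clk_period_ns * 1000 / phase_step_ps)  # 392 steps
```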

On the other hand, the whole point of a source-synchronous interface
is that you only need low skew between the outputs, not low skew
between the input clock and the outputs. Typically just placing the
outputs in the IOB and using the same clock resource is good enough.
Skew between outputs is much lower than the variance in output delay.

--
Gabor

--
Gabor
 
On 4/12/2017 5:16 PM, Gabor wrote:
On Wednesday, 4/12/2017 4:27 PM, rickman wrote:
On 4/12/2017 4:20 PM, John Larkin wrote:
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

If you really need to control output delay you can use the IODELAY
block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.


I still think the IODELAY could help you. The output goes through
an adjustable IODELAY, then you route the output back in through a
pin, adjust the input IODELAY to figure out where the incoming edge
is, and then use a feedback loop to keep the output delay constant.
It's a technique used for deskewing DRAM data. I think the main
clock would also have to be deskewed with a BUFG so you have a good
reference for the input. Or, if you characterized the delay-vs-temp
in the lab, you could run in open-loop mode by adjusting the IODELAY
tap based on the temperature you read.

Yes, the tools are definitely pessimistic. They're only useful for
worst-case. I'm pretty sure you can put in the max temperature when
doing PAR, so you could isolate the effects of just that, but it
will still probably be worse variation than in reality.

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.

That's also not adjustable in real time though.

I believe what the others are talking about is a real time adjustable
delay that is built into the clocking module. I don't know about the
Zynq, but Xilinx has what they call a delay locked loop which sounds
exactly like what you need. I believe it works by syncing the output
signal to the clock signal. There will be some signal path in the
feedback loop which will still cause timing variation with temperature
and I suppose voltage, but the variation in process can be compensated.


In the 7-series what you want is the MMCM, which has the ability to
adjust the output phase in steps of 1/56 of the VCO period. This
adjustment can be applied to a subset of the MMCM outputs, so you
can for example vary the outgoing clock phase while keeping the
data phase constant with respect to the clock driving the MMCM.

On the other hand, the whole point of a source-synchronous interface
is that you only need low skew between the outputs, not low skew
between the input clock and the outputs. Typically just placing the
outputs in the IOB and using the same clock resource is good enough.
Skew between outputs is much lower than the variance in output delay.

Yeah, well, it's not like we really know the true and full problem. We
just know he doesn't like the timing range reported by the tools.

--

Rick C
 
Den onsdag den 12. april 2017 kl. 22.20.19 UTC+2 skrev John Larkin:
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line.


Our output data-valid window is predicted by the tools to be very
narrow relative to the clock period. We figure that controlling the
temperature (and maybe controlling Vcc-core vs temperature) will open
up the timing window. The final analysis will have to be experimental.

We can't crank in a constant delay to fix anything; the problem is the
predicted variation in delay.


I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read.

Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality.

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.

You are right, the 7010 and 7020 only have high-range I/O, so no ODELAY.

are you just trying to keep a fixed alignment between clock and data output?

You can do tricks with DDR output flops: send the data out through a DDR flop with both
inputs tied to the data, and send the clock out through a DDR flop with constant 0,1 as inputs.
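The DDR trick above can be modeled behaviorally to show why clock and data track. This half-cycle sampling model is a simplification for illustration, not the full behavior of the ODDR primitive:

```python
# Behavioral model of the DDR-output trick: an ODDR drives D1 during
# the high half-cycle and D2 during the low half-cycle. Constant 1,0
# inputs regenerate the clock; the same data bit on both inputs passes
# SDR data through an identical flop-to-pin path, so clock and data
# see the same delay and should track over PVT.

def oddr(d1, d2, half_cycles):
    """Pin value over successive half-cycles: D1 on the high half,
    D2 on the low half."""
    return [d1 if i % 2 == 0 else d2 for i in range(half_cycles)]

def forward(data_bits):
    """Forward data and a regenerated clock through matching ODDRs."""
    clk, dat = [], []
    for bit in data_bits:
        clk += oddr(1, 0, 2)      # constant 1,0 -> clock replica
        dat += oddr(bit, bit, 2)  # same bit on both halves -> SDR data
    return clk, dat
```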

-Lasse
 
My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.

Hmm. I've used a real-time-adjustable ODELAY block, but that wasn't in a Zynq.

If you can add more hardware to the board, you could re-register the data in some external 74LS flops.

You could use unregistered outputs and make your own delay line with a carry chain, which you can create with behavioral code.
 
On Wed, 12 Apr 2017 15:22:25 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.


Hmm. I've used a real-time-adjustable ODELAY block, but that wasn't in a Zynq.

If you can add more hardware to the board, you could re-register the data in some external 74LS flops.

We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.


You could use unregistered outputs and make your own delay line with a carry chain, which you can create with behavioral code.

I think that has even higher uncertainty, probably more than a full
clock period, so we couldn't reliably load those external flops.


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 4/12/2017 7:21 PM, John Larkin wrote:
On Wed, 12 Apr 2017 15:22:25 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

My FPGA guy says that the ZYNQ does not have adjustable delay after
the i/o block flops. We can vary drive strength in four steps, and we
may be able to do something with that.


Hmm. I've used a real-time-adjustable ODELAY block, but that wasn't in a Zynq.

If you can add more hardware to the board, you could re-register the data in some external 74LS flops.

We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.



You could use unregistered outputs and make your own delay line with a carry chain, which you can create with behavioral code.

I think that has even higher uncertainty, probably more than a full
clock period, so we couldn't reliably load those external flops.

The way you have constrained the design, I think you will need to design
your own chip. I would say you need to find a way to relax one of your
many constraints. Not using the PLL/DLL is a real killer. That would
be a good one to fix.

I haven't used the Xilinx tools in a long time, but I seem to recall
there was a way to work with a single temperature. It may have been the
hot number or the cold number, but not an arbitrary value in between.
But that may have been the post layout simulation timing. Simulation is
not a great way to verify timing in general, but it could be made to
work for your case. I'd say get a Xilinx FAE involved.

--

Rick C
 
We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.

Like Lasse said above, you can adjust the output delay with a half-cycle resolution using ODDRs. This sounds good enough for your application. I used that exact method once for a DRAM (single-data-rate) interface. (I think the training method was to write data to an unused location in DRAM with various phase relationships, read it back, and see which writes were successful.) Your issue sounds a lot like the same issues people have with DRAM. I don't think you'll see a 3:1 variation in reality.
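The training method described above amounts to finding the widest passing window of phase settings and sitting in its middle. A sketch, with `try_phase()` standing in (hypothetically) for the actual write/readback test:

```python
# Sketch of phase training: try each phase setting, record whether a
# write/readback passes, then pick the center of the widest contiguous
# run of passing settings. try_phase() is an invented stand-in for the
# real hardware test.

def best_phase(try_phase, nphases):
    """Return the center of the longest contiguous passing window,
    or None if no setting passes."""
    results = [try_phase(p) for p in range(nphases)]
    best_start, best_len = None, 0
    start = None
    for p, ok in enumerate(results + [False]):  # sentinel closes last run
        if ok and start is None:
            start = p
        elif not ok and start is not None:
            if p - start > best_len:
                best_start, best_len = start, p - start
            start = None
    if best_start is None:
        return None
    return best_start + best_len // 2
```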
 
On Thu, 13 Apr 2017 15:22:52 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:

We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.

Like Lasse said above, you can adjust the output delay with a half-cycle resolution using ODDRs.

I can declare the differential-input clock polarity either way, which
would shift things 3.5 ns (out of a 7 ns clock.) But the guaranteed
data-valid window is less than 2 ns.


This sounds good enough for your application. I used that exact method once for a DRAM (single-data-rate) interface. (I think the training method was to write data to an unused location in DRAM with various phase relationships, read it back, and see which writes were successful.) Your issue sounds a lot like the same issues people have with DRAM. I don't think you'll see a 3:1 variation in reality.

I sure hope so.



--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
Den fredag den 14. april 2017 kl. 06.12.52 UTC+2 skrev John Larkin:
On Thu, 13 Apr 2017 15:22:52 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:


We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.

Like Lasse said above, you can adjust the output delay with a half-cycle resolution using ODDRs.

I can declare the differential-input clock polarity either way, which
would shift things 3.5 ns (out of a 7 ns clock.) But the guaranteed
data-valid window is less than 2 ns.

the point of using DDR was not to shift the clock but to keep the clock and
data aligned

"regenerating" the clock with a DDR, means the clock and data gets treated the same and both have the same path DDR-IOB so they should track

getting the output clock aligned with the input clock (if needed) might be possible using the "zero-delay-buffer" mode of the MMCM
 
On 11 Apr 2017 09:52:20 -0700, Winfield Hill
<hill@rowland.harvard.edu> wrote:

Here's my fan speed controller. Quite serious.
https://www.dropbox.com/s/7gsrmb9uci1wdb9/RIS-764Gb_fan-speed-controller.JPG?dl=0

First there's an LM35 TO-220-package temp sensor
mounted to the heat sink, amplify and offset its
10mV/deg signal by 11x, to generate a fan-speed
voltage, present to a TC647 fan-speed PWM chip,
add optional MOSFET for when using a non-PWM fan.
E.g., cool, fan runs at 0%, ramps its speed over
a 30 to 40 degree range, thereafter runs at 100%.
TC647 chip senses stalled fan, makes error signal.

Complicated. Here's my simple controller.

https://www.dropbox.com/s/u6b7ujxv3y5g20p/Tnduction_fan_controller.pdf?dl=1

The ATtiny88 is about as cheap as the LM35. The microprocessor allows
many things other than simple linear control. It has derivative
action, for instance, to catch a rapidly heating heat sink before it
gets very hot. Other features include a POST that, among other
things, spins the fan to full speed for about a second before settling
down into the control loop. I've found some fans that have enough
bearing stiction that they won't start at the 20% of full voltage that
idle provides. So the full power pulse gives them a starting kick.
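The control behavior described above can be sketched compactly. The gains, ramp band, idle floor, and one-second kick duration here are illustrative, not taken from the actual firmware:

```python
# Sketch of the fan-control behavior described above: a full-speed kick
# at power-on to break bearing stiction, then proportional + derivative
# action on heatsink temperature with a 20% idle floor. All constants
# are illustrative assumptions.

IDLE, FULL = 0.20, 1.00

def fan_command(temp_c, prev_temp_c, dt_s, t_since_boot_s,
                ramp_lo=30.0, ramp_hi=40.0, kd=0.05):
    """Return fan drive as a fraction of full speed."""
    if t_since_boot_s < 1.0:
        return FULL                     # startup kick
    # proportional term: 0..1 across the ramp band
    p = (temp_c - ramp_lo) / (ramp_hi - ramp_lo)
    # derivative term: push harder if the sink is heating quickly
    d = kd * max(0.0, (temp_c - prev_temp_c) / dt_s)
    return min(FULL, max(IDLE, p + d))
```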

The 3rd output is designed to allow any voltage up to the transistor's
limit to be controlled. I designed this controller for a device that
had a 45 volt fan.

The board is tiny - about the size of 2 postage stamps. The LM35 in a
surface mount package is on the bottom of the board. The board is
designed to be glued in place with the LM35 in contact with the heat
sink. I use the E6000 neoprene adhesive available at Wal-mart in
their marine department or in calking gun tubes from McMaster. It is
VERY strong but pulls loose from the substrate when stretched.

I'm going to open source this design so as soon as I get a round tuit,
I'll put the CAD files (KiCAD) and firmware source and hex on
http://www.neon-john.com.

John
John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address
 
Neon John wrote...
Complicated. Here's my simple controller.

https://www.dropbox.com/s/u6b7ujxv3y5g20p/Tnduction_fan_controller.pdf?dl=1

The ATtiny88 is about as cheap as the LM35. The microprocessor allows
many things other than simple linear control. It has derivative
action, for instance, to catch a rapidly heating heat sink before it
gets very hot. Other features include a POST that, among other
things, spins the fan to full speed for about a second before settling
down into the control loop. I've found some fans that have enough
bearing stiction that they won't start at the 20% of full voltage that
idle provides. So the full power pulse gives them a starting kick.

The 3rd output is designed to allow any voltage up to the transistor's
limit to be controlled. I designed this controller for a device that
had a 45 volt fan.

The board is tiny - about the size of 2 postage stamps. The LM35 in a
surface mount package is on the bottom of the board. The board is
designed to be glued in place with the LM35 in contact with the heat
sink. I use the E6000 neoprene adhesive available at Wal-mart in
their marine department or in calking gun tubes from McMaster. It is
VERY strong but pulls loose from the substrate when stretched.

I'm going to open source this design so as soon as I get a round tuit,
I'll put the CAD files (KiCAD) and firmware source and hex on
http://www.neon-john.com.

John
John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address

Thanks John, that is pretty simple. Of course the processor
requires a program, but if you put it up, that's good. And
include the controller code-burning instructions.


--
Thanks,
- Win
 
On Fri, 14 Apr 2017 11:47:57 -0400, Neon John <no@never.com> wrote:

On 11 Apr 2017 09:52:20 -0700, Winfield Hill
<hill@rowland.harvard.edu> wrote:

Here's my fan speed controller. Quite serious.
https://www.dropbox.com/s/7gsrmb9uci1wdb9/RIS-764Gb_fan-speed-controller.JPG?dl=0

First there's an LM35 TO-220-package temp sensor
mounted to the heat sink, amplify and offset its
10mV/deg signal by 11x, to generate a fan-speed
voltage, present to a TC647 fan-speed PWM chip,
add optional MOSFET for when using a non-PWM fan.
E.g., cool, fan runs at 0%, ramps its speed over
a 30 to 40 degree range, thereafter runs at 100%.
TC647 chip senses stalled fan, makes error signal.


Complicated. Here's my simple controller.

https://www.dropbox.com/s/u6b7ujxv3y5g20p/Tnduction_fan_controller.pdf?dl=1

The ATtiny88 is about as cheap as the LM35. The microprocessor allows
many things other than simple linear control. It has derivative
action, for instance, to catch a rapidly heating heat sink before it
gets very hot. Other features include a POST that, among other
things, spins the fan to full speed for about a second before settling
down into the control loop. I've found some fans that have enough
bearing stiction that they won't start at the 20% of full voltage that
idle provides. So the full power pulse gives them a starting kick.

The 3rd output is designed to allow any voltage up to the transistor's
limit to be controlled. I designed this controller for a device that
had a 45 volt fan.

The board is tiny - about the size of 2 postage stamps. The LM35 in a
surface mount package is on the bottom of the board. The board is
designed to be glued in place with the LM35 in contact with the heat
sink. I use the E6000 neoprene adhesive available at Wal-mart in
their marine department or in calking gun tubes from McMaster. It is
VERY strong but pulls loose from the substrate when stretched.

I'm going to open source this design so as soon as I get a round tuit,
I'll put the CAD files (KiCAD) and firmware source and hex on
http://www.neon-john.com.

John
John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address

The LM35 is supposedly not c-load stable, but most things are c-load
stable with enough c.

It is a kinda tricky part.


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 4/14/2017 8:10 AM, lasselangwadtchristensen@gmail.com wrote:
Den fredag den 14. april 2017 kl. 06.12.52 UTC+2 skrev John Larkin:
On Thu, 13 Apr 2017 15:22:52 -0700 (PDT), Kevin Neilson
<kevin.neilson@xilinx.com> wrote:


We are exactly trying to drive external flops, some 1 ns CMOS parts.
They are clocked by the same clock that is going into the ZYNQ, and
the FPGA needs to set up their D inputs reliably. We can't use a PLL
or DLL inside the FPGA.

So the problem is that the Xilinx tools are reporting a huge (almost
3:1) spread in possible prop delay from our applied clock to the iob
outputs. The tools apparently assume the max process+temperature+power
supply limits, without letting us constrain these, and without
assigning any specific blame.

Like Lasse said above, you can adjust the output delay with a half-cycle resolution using ODDRs.

I can declare the differential-input clock polarity either way, which
would shift things 3.5 ns (out of a 7 ns clock.) But the guaranteed
data-valid window is less than 2 ns.


the point of using DDR was not to shift the clock but to keep the clock and
data aligned

"regenerating" the clock with a DDR, means the clock and data gets treated the same and both have the same path DDR-IOB so they should track

getting the output clock aligned with the input clock (if needed) might be possible using the "zero-delay-buffer" mode of the MMCM

He has already said he has some constraint that won't let him use a PLL
or DLL inside the FPGA, so I expect he can't use this either. I can't
imagine what his constraint is, maybe the clock is not regular like a
typical clock but rather is an async data strobe.

I don't recall the upper limits of what can be done with the SERDES that
most FPGAs have on chip. But they all run at multi-GHz data rates and
are clocked much slower with an internal clock multiplier. Likely that
can't be used either.

It does seem pretty silly to be adjusting timing by varying the
temperature of the die. Not just crude, but fairly ineffective as
delays are controlled by local temperature and a die can have hot spots.
If the full problem were explained perhaps a solution could be offered.

Anyone know what a "1 ns CMOS part" means?

--

Rick C
 
On 14 Apr 2017 10:07:41 -0700, Winfield Hill
<hill@rowland.harvard.edu> wrote:


Thanks John, that is pretty simple. Of course the processor
requires a program, but if you put it up, that's good. And
include the controller code-burning instructions.

I'll put up the code and instructions on programming. I used Atmel's
Studio development environment (unfortunately based on Microsoft Visual
Studio), with the GNU C suite as the backend, and I use AVRDUDE in a
command script to program in production.

The genuine Atmel programmer is about $35 from Digikey et al but there
are plenty of articles on the web about how to build $5 programmers. I
notice that AVRDUDE even supports a programmer that uses a parallel port. Check
out http://www.avrfreaks.com.

John

John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address
 
On Fri, 14 Apr 2017 10:58:11 -0700, John Larkin
<jjlarkin@highland_snip_technology.com> wrote:


The LM35 is supposedly not c-load stable, but most things are c-load
stable with enough c.

It is a kinda tricky part.

It is in this design. There are several hundred of these boards in
service in our products. I added the cap because the processor was
picking up lots of RFI from the intense RF field inside the induction
heater.

John
John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address
 
On Sat, 15 Apr 2017 10:23:42 -0400, Neon John <no@never.com> wrote:

On Fri, 14 Apr 2017 10:58:11 -0700, John Larkin
<jjlarkin@highland_snip_technology.com> wrote:


The LM35 is supposedly not c-load stable, but most things are c-load
stable with enough c.

It is a kinda tricky part.

It is in this design. There are several hundred of these boards in
service in our products. I added the cap because the processor was
picking up lots of RFI from the intense RF field inside the induction
heater.

John
John DeArmond
http://www.neon-john.com
http://www.tnduction.com
Tellico Plains, Occupied TN
See website for email address

LM35 output is an emitter follower with a weak pulldown. An ideal RF
detector.

And it latches up if it possibly can.

LM71, SPI interface, is a nice part.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
