Windows rename


Jeroen Belleman
Guest

Wed Aug 31, 2016 1:16 pm   



On 2016-08-31 01:28, Don Y wrote:
Quote:
On 8/30/2016 1:02 PM, Jeroen Belleman wrote:
On 30/08/16 20:37, Don Y wrote:
On 8/30/2016 10:39 AM, John Larkin wrote:
Floats solve a lot of scaling dilemmas.

Floats are the PROBLEM!

In an instrument, it's really convenient to do all the math -
calibrations, display formatting, web pages, maybe polynomial
corrections, filtering, like that - as floats, then clip/scale for
DACs or whatever. It's really easier than explaining a zillion
fixed-point shift-and-scale operations to an embedded programmer.

It doesn't matter if he's still trying to apply them
incorrectly. A float still has a fixed size representation in
the machine. Add 1,000,000,000 to 0.00000001 and tell me
what you get. Or, subtract 1,234,567.00 from 1,234,568.00.

Those are not things that many electronic instruments need to do
to PPB resolution. But an embedded programmer should be aware of the
properties of single and double floats. And of integers!

The only things that we do that need extreme dynamic range are
frequencies and time intervals. Some of our time-interval counters and
delay generators can handle 0 to 999 seconds with 1 ps resolution. We
generally represent that as 64 bits with a 1 ps LSB.

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

(In some cases, the loop will never terminate!)

Anyone showing me code with either of these will get a free sermon
about the dangers of floating point numbers.

Really? You've *never* graphed an equation (evaluated a function)
where the domain and range were expressed as floats? Do you magically
transform all such functions to the domain of integers, evaluate
the transformed function and then transform the results back to
floats?

Do all your DACs output integral voltages? And, all your ADCs
read in steps of 1V?


I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

So yes, I always map the domain on integers, though
not the range.

Jeroen Belleman
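
[A minimal C sketch of the failure mode described above -- my own
illustration, with made-up values: when START is so large that
INCREMENT falls below half a ULP of the running value, the
accumulating loop stops advancing entirely, while the offset form
keeps its precision in the small variable.

#include <stdio.h>

int main(void)
{
    const float START = 1.0e8f;    /* ULP of 1.0e8f is 8.0 */
    const float INCREMENT = 1.0f;  /* below half a ULP of START */
    const float MAX = START + 96.0f;

    /* Accumulating form: START + INCREMENT rounds back to START,
       so "cal_point += INCREMENT" makes no progress and the loop
       never terminates. */
    float cal_point = START;
    float next = cal_point + INCREMENT;  /* store to force rounding */
    printf("stuck? %s\n", (next == cal_point) ? "yes" : "no");

    /* Offset form: the small iterate keeps full precision and the
       loop terminates as intended. */
    for (float cal_offset = 0.0f; cal_offset < MAX - START;
         cal_offset += INCREMENT)
        cal_point = START + cal_offset;  /* rounds once per point */

    printf("offset form reached %.1f\n", cal_point);
    return 0;
}
]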

Tom Gardner
Guest

Wed Aug 31, 2016 2:37 pm   



On 31/08/16 02:00, John Larkin wrote:
Quote:
On Fri, 26 Aug 2016 16:40:35 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

On 26/08/16 16:22, John Larkin wrote:
On Fri, 26 Aug 2016 15:56:11 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

On 26/08/16 15:16, John Larkin wrote:
On Fri, 26 Aug 2016 08:06:00 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

On 26/08/16 04:50, John Larkin wrote:
It's amusing how many 8-week code camps there are these days, teaching
absolute beginners to code.

No, that's not amusing. Really, that's not amusing :(

The public (including educators and politicians) equate "tech" and
"coding". So we're teaching millions of kids to type some scripting
language and then declare them "savvy."

Gosh, not many of us will be able to design real electronics. Expect
the price to go up.

Price will go down because it will become too expensive
to develop in our countries.

I'm not seeing innovative (or competitive) electronic hardware design
coming out of China or India or Brazil. We have a little competition
in Europe, mostly UK and Germany, but not much.

... yet. Remember what was said about Japan in our youth!

First, it was "Japanese stuff is junk."


And there was truth to that.


Quote:
Then it was "The Japanese are going to kill us in semiconductors and
cars and scientific research."


And there is some truth to that, particularly if you consider
cars' market share.


Quote:
Now it's "The Japanese are getting old and have been stagnant for a
decade or two."


And that can also be applied to many parts of the 1st world.

There is beginning to be some interesting stuff coming from
China, if you can find it in amongst the dross and counterfeits.
Nothing revolutionary, as yet, but it would be unwise to bet
against that in the future.

Don Y
Guest

Wed Aug 31, 2016 2:39 pm   



On 8/31/2016 12:16 AM, Jeroen Belleman wrote:
Quote:
On 2016-08-31 01:28, Don Y wrote:
On 8/30/2016 1:02 PM, Jeroen Belleman wrote:
On 30/08/16 20:37, Don Y wrote:
On 8/30/2016 10:39 AM, John Larkin wrote:
Floats solve a lot of scaling dilemmas.

Floats are the PROBLEM!

In an instrument, it's really convenient to do all the math -
calibrations, display formatting, web pages, maybe polynomial
corrections, filtering, like that - as floats, then clip/scale for
DACs or whatever. It's really easier than explaining a zillion
fixed-point shift-and-scale operations to an embedded programmer.

It doesn't matter if he's still trying to apply them
incorrectly. A float still has a fixed size representation in
the machine. Add 1,000,000,000 to 0.00000001 and tell me
what you get. Or, subtract 1,234,567.00 from 1,234,568.00.

Those are not things that many electronic instruments need to do
to PPB resolution. But an embedded programmer should be aware of the
properties of single and double floats. And of integers!

The only things that we do that need extreme dynamic range are
frequencies and time intervals. Some of our time-interval counters and
delay generators can handle 0 to 999 seconds with 1 ps resolution. We
generally represent that as 64 bits with a 1 ps LSB.

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

(In some cases, the loop will never terminate!)

Anyone showing me code with either of these will get a free sermon
about the dangers of floating point numbers.

Really? You've *never* graphed an equation (evaluated a function)
where the domain and range were expressed as floats? Do you magically
transform all such functions to the domain of integers, evaluate
the transformed function and then transform the results back to
floats?

Do all your DACs output integral voltages? And, all your ADCs
read in steps of 1V?


I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

So yes, I always map the domain on integers, though
not the range.


What's the calibration *value* you're mapping? Also
always integers? Never trying to linearize a response
between two endpoints?

How would I compute the length of a cubic Bezier:
A(1-t)^3 + 3Bt(1-t)^2 + 3C(1-t)t^2 + Dt^3 for t in [0,1]
*without* resorting to iterating using some small increment
of a floating point value (in this case, t)?

(i.e., there is no closed-form solution)
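
[One practical answer, sketched in C -- my own illustration, with
made-up control points and step count: map the parameter onto an
integer, as Jeroen suggests, and sum chord lengths; t is recomputed
from i on every pass instead of being accumulated.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Evaluate A(1-t)^3 + 3Bt(1-t)^2 + 3C(1-t)t^2 + Dt^3 */
static Point bezier(Point A, Point B, Point C, Point D, double t)
{
    double u = 1.0 - t;
    double a = u*u*u, b = 3.0*t*u*u, c = 3.0*t*t*u, d = t*t*t;
    Point p = { a*A.x + b*B.x + c*C.x + d*D.x,
                a*A.y + b*B.y + c*C.y + d*D.y };
    return p;
}

/* Approximate the arc length with N chords; the domain is mapped
   onto the integer i, so t never accumulates rounding error. */
static double bezier_length(Point A, Point B, Point C, Point D, int N)
{
    double length = 0.0;
    Point prev = A;
    for (int i = 1; i <= N; i++) {
        Point cur = bezier(A, B, C, D, (double)i / N);
        length += hypot(cur.x - prev.x, cur.y - prev.y);
        prev = cur;
    }
    return length;
}

int main(void)
{
    Point A = {0,0}, B = {1,2}, C = {3,2}, D = {4,0};  /* made up */
    printf("length ~= %f\n", bezier_length(A, B, C, D, 1000));
    return 0;
}
]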

Don Y
Guest

Wed Aug 31, 2016 2:42 pm   



On 8/31/2016 12:16 AM, Jeroen Belleman wrote:

Quote:
It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}


[note that "i*INCREMENT" is the same as my cal_offset -- in terms of
retained precision in the float]

Quote:
So yes, I always map the domain on integers, though
not the range.


Jeroen Belleman
Guest

Wed Aug 31, 2016 4:58 pm   



On 2016-08-31 10:42, Don Y wrote:
Quote:
On 8/31/2016 12:16 AM, Jeroen Belleman wrote:

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

[note that "i*INCREMENT" is the same as my cal_offset -- in terms of
retained precision in the float]


That isn't true in binary floating-point math! You lose log2(n) bits
of precision in n iterations, one LSB every time you double the number
of additions.

Jeroen Belleman
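
[A quick C sketch of that loss -- my own illustration: adding 0.1f
(which has no exact binary representation) a million times drifts by
hundreds of units, roughly the log2(10^6) ~ 20 bits Jeroen predicts,
while recomputing i*INCREMENT rounds only once per point.

#include <stdio.h>

int main(void)
{
    const float INCREMENT = 0.1f;
    const int N = 1000000;

    float accumulated = 0.0f;
    for (int i = 0; i < N; i++)
        accumulated += INCREMENT;   /* one rounding error per pass */

    float stepped = (float)N * INCREMENT;  /* rounds once */
    double exact = (double)N * 0.1;        /* reference value */

    printf("accumulated: %.3f (error %+.3f)\n",
           accumulated, accumulated - exact);
    printf("i*INCREMENT: %.3f (error %+.3f)\n",
           stepped, stepped - exact);
    return 0;
}
]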

Don Y
Guest

Wed Aug 31, 2016 6:28 pm   



On 8/31/2016 3:58 AM, Jeroen Belleman wrote:
Quote:
On 2016-08-31 10:42, Don Y wrote:
On 8/31/2016 12:16 AM, Jeroen Belleman wrote:

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

[note that "i*INCREMENT" is the same as my cal_offset -- in terms of
retained precision in the float]

That isn't true in binary floating-point math! You lose log2(n) bits
of precision in n iterations, one LSB every time you double the number
of additions.


You're still missing the point. Your approach *requires*
"INCREMENT" to be constant and invariant. I.e., you can, at best,
tie N points on a curve to an actual function at some degree of
accuracy. You have to rely on the function behaving predictably
(as defined a priori).

You can't adjust your "fit" of the curve based on the characteristics
of the ACTUAL curve. You're just naively sampling it at fixed
intervals and hoping it "behaves" between those sample points.

Work the math of the cubic bezier example. *Pick* some number "N"
that represents how "well" you're going to "fit the curve".
Then, run varying sets of {A,B,C,D} through your fitter and gasp
at how poorly your "constant increment" based fitter works.
Your "fitter" has to naively approach the curve at fixed points,
regardless of the shape of the curve. For curves of low curvature,
it will work admirably. For curves of relatively constant curvature,
less accurate but still "fair". For curves with varying curvature,
loops or discontinuities, it will fare poorly.

The tightness of fit possible varies significantly with curve shape
and where on the particular curve you happen to be evaluating the
function.

E.g., when drawing families of hyperbolic curves, the curves
"flatten out" the farther you get from the focal points (i.e.,
as the curves approach their asymptotes). Trying to cover
a single curve with a fixed number of points leaves you
with large interpolative errors close to the focus and
redundant (essentially colinear) points as you move out
along the asymptotes. To *improve* the fit near the foci,
you end up dramatically increasing the number of points.
And, this just wastes *more* "fixed increment" points out
on the tails.

By contrast, being able to massage small "increments" to fit
the points that *need* fitting lets you control the overall
fit instead of just the fit in the proximity of N particular
points chosen (essentially) arbitrarily. Note that this goodness
of fit doesn't care if you lose a bit to rounding -- as long
as the value is reproducible (e.g., if I'm fitting a curve
at a point vs. a point+1ulp, it's still better than picking N
points (t values, in the case of the parameterized Bezier)
independently of the actual shape of the curve).

You want to be able to decide how closely you locate your
cal_points based on what the function that you are calibrating/plotting
is doing in that vicinity (typically, comparing closeness of fit
to some criteria and then doubling the frequency of points in that
vicinity; then, testing *their* fit and doubling, again, as necessary).
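
[A compact C sketch of that doubling/halving idea -- my own
illustration; curve() and the tolerance are stand-ins for whatever
function is being plotted: measure how far the midpoint strays from
the chord and subdivide only where the curve demands it.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Stand-in for the function being plotted/calibrated. */
static Point curve(double t) { Point p = { t, sin(5.0 * t) }; return p; }

/* Perpendicular distance from point m to the chord a-b. */
static double deviation(Point a, Point b, Point m)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = hypot(dx, dy);
    if (len == 0.0) return hypot(m.x - a.x, m.y - a.y);
    return fabs(dx * (m.y - a.y) - dy * (m.x - a.x)) / len;
}

/* Emit a chord where it already fits; split where it doesn't. The
   points land where the curve needs them, not at fixed increments,
   and an off-by-a-few-ulps midpoint does no harm. */
static void plot_adaptive(double t0, double t1, double tol)
{
    double tm = 0.5 * (t0 + t1);   /* "somewhere in the middle" */
    Point a = curve(t0), b = curve(t1), m = curve(tm);
    if (deviation(a, b, m) < tol) {
        printf("chord (%f,%f)-(%f,%f)\n", a.x, a.y, b.x, b.y);
    } else {
        plot_adaptive(t0, tm, tol);
        plot_adaptive(tm, t1, tol);
    }
}

int main(void)
{
    plot_adaptive(0.0, 3.0, 0.01);  /* made-up domain and tolerance */
    return 0;
}
]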

Jeroen Belleman
Guest

Wed Aug 31, 2016 7:00 pm   



On 2016-08-31 14:28, Don Y wrote:
Quote:
On 8/31/2016 3:58 AM, Jeroen Belleman wrote:
On 2016-08-31 10:42, Don Y wrote:
On 8/31/2016 12:16 AM, Jeroen Belleman wrote:

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

[note that "i*INCREMENT" is the same as my cal_offset -- in terms of
retained precision in the float]

That isn't true in binary floating-point math! You lose log2(n) bits
of precision in n iterations, one LSB every time you double the number
of additions.

You're still missing the point. Your approach *requires*
"INCREMENT" to be constant and invariant. [snip!]


Well, yes. Methinks you're moving the goal posts. If
you re-evaluate INCREMENT on the fly, this whole argument
is moot and, offhand, I see no other way than to do it
your way.

Jeroen Belleman

Don Y
Guest

Thu Sep 01, 2016 12:56 am   



On 8/31/2016 6:00 AM, Jeroen Belleman wrote:
Quote:
On 2016-08-31 14:28, Don Y wrote:
On 8/31/2016 3:58 AM, Jeroen Belleman wrote:
On 2016-08-31 10:42, Don Y wrote:
On 8/31/2016 12:16 AM, Jeroen Belleman wrote:

It comes into play in a surprising number of (varied!) places.

for (float cal_point = START; cal_point < MAX; cal_point += INCREMENT)

will yield different results than:

for (float cal_offset = 0; cal_offset < MAX-START; cal_offset += INCREMENT)
    cal_point = START + cal_offset;

if START >> INCREMENT.

I'll always do something more like:

int N = (int)((MAX - START) / INCREMENT);
<Check that N makes sense>
for (int i = 0; i < N; i++) {
    cal_point = START + i * INCREMENT;
    ...
}

[note that "i*INCREMENT" is the same as my cal_offset -- in terms of
retained precision in the float]

That isn't true in binary floating-point math! You lose log2(n) bits
of precision in n iterations, one LSB every time you double the number
of additions.

You're still missing the point. Your approach *requires*
"INCREMENT" to be constant and invariant. [snip!]

Well, yes. Methinks you're moving the goal posts. If
you re-evaluate INCREMENT on the fly, this whole argument
is moot and off hand, I see no other way than to do it
your way.


It's just a different example of "adding small increments to an
otherwise large number" (the initial example that I presented).
The idea of iterating or incrementing a float was met with
derision:

"Anyone showing me code with either of these will get a free
sermon about the dangers of floating point numbers."

"It would be, ahem, unusual to use a float as a loop counter.
I've never done it, and I don't think I've ever seen it done
for real."

As I indicated:

"It comes into play in a surprising number of (varied!) places."

*Hoping* that folks would be able to make the mental leap to
specific cases.

The specific case I had in mind was an application that draws lines
of constant time-difference in a LORAN-C plotter:

// two floating point iterators/iterations
td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
td_variable[GOAL] := td_variable[MIDPOINT] + offset;

// plot a point some distance from our present position on a
// curve of constant time-difference (~1 second to evaluate)
lat_lon[GOAL] := TDtoLL(td_constant, td_variable[GOAL]);
coordinates[GOAL] := map_projection(lat_lon[GOAL]);

while (FOREVER) {
    // yet another fractional second
    lat_lon[MIDPOINT] := TDtoLL(td_constant, td_variable[MIDPOINT]);
    coordinates[MIDPOINT] := map_projection(lat_lon[MIDPOINT]);

    // linear approximation of curve from CURRENT to GOAL
    chord := (coordinates[CURRENT], coordinates[GOAL]);

    // evaluate tightness of fit to actual curve *at* the "midpoint"
    deviation := distance(chord, coordinates[MIDPOINT]);

    if (deviation < tolerable) {
        // we want to draw the linear approximation/chord:
        //     draw( coordinates[CURRENT], coordinates[GOAL] )
        // but this, instead, is better and we've already *paid*
        // for the results:
        draw( coordinates[CURRENT], coordinates[MIDPOINT] );
        draw( coordinates[MIDPOINT], coordinates[GOAL] );
        td_variable[CURRENT] := td_variable[GOAL];
        offset := 2.0 * offset;

        // hopefully we're moving out along the tail of the curve!
        // take bigger bites lest we spend forever creeping along!
        td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
        td_variable[GOAL] := td_variable[MIDPOINT] + offset;
    } else {
        // the chord exhibits too much deviation. Set sights on
        // a shorter segment -- say, *half* as far as original goal
        offset := offset / 2;

        // compute new goal at current + 2*offset...
        // oh, wait! we've already got that as our OLD midpoint
        // cuz our old offset was 2 * our new offset!
        // just reuse them and save that second of number crunching!!
        td_variable[GOAL] := td_variable[MIDPOINT];
        // lat_lon[GOAL] := lat_lon[MIDPOINT]; -- not needed
        coordinates[GOAL] := coordinates[MIDPOINT];

        // find "midpoint" of our newest hopeful chord
        td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
    } // one more time!
}

Note that there are lots of opportunities for roundoff errors.
But, the resulting points plotted still remain in the domain of
the "line of constant time difference" (td_constant never changes).
Furthermore, we don't "sweat" how exactly we "find the midpoint"
because it's a nonlinear system in which we're working; we just want
some point "in the middle, somewhere" at which to compute a
deviation -- if we're off by lots of ulps... <shrug>

Nowadays, you could do much better -- faster processor (so the
user wouldn't have to wait a second or more for EACH point on
the curve to be plotted), more memory (so you could save the
data pertinent to the old "goal": even though your current
linear approximation couldn't use it, sooner or later you
will pass through that point!), etc.

I've elided some optimizations that make the code harder to
follow (mainly, not trying to double offset as quickly to
help in cases when you are plotting *into* an area of
INCREASING curvature -- takes a boolean/bit to implement).

[Only had 256 bytes of RAM to work with, TOTAL, for the product
(shared with other activities happening concurrently in the device).
As such, a single TD eats 4 bytes for each coordinate, a LL eats 4 more
for each coordinate, etc. This is what we'd consider a "pig"
35 years ago! :> ]

Wanna tell me how I could have mapped these iterations onto
integers?

Don Y
Guest

Thu Sep 01, 2016 1:03 am   



On 8/31/2016 11:56 AM, Don Y wrote:
Quote:
// two floating point iterators/iterations
td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
td_variable[GOAL] := td_variable[MIDPOINT] + offset;

// plot a point some distance from our present position on a
// curve of constant time-difference (~1 second to evaluate)
lat_lon[GOAL] := TDtoLL(td_constant, td_variable[GOAL]);
coordinates[GOAL] := map_projection(lat_lon[GOAL]);

while (FOREVER) {
    // yet another fractional second
    lat_lon[MIDPOINT] := TDtoLL(td_constant, td_variable[MIDPOINT]);
    coordinates[MIDPOINT] := map_projection(lat_lon[MIDPOINT]);

    // linear approximation of curve from CURRENT to GOAL
    chord := (coordinates[CURRENT], coordinates[GOAL]);

    // evaluate tightness of fit to actual curve *at* the "midpoint"
    deviation := distance(chord, coordinates[MIDPOINT]);

    if (deviation < tolerable) {
        // we want to draw the linear approximation/chord:
        //     draw( coordinates[CURRENT], coordinates[GOAL] )
        // but this, instead, is better and we've already *paid*
        // for the results:
        draw( coordinates[CURRENT], coordinates[MIDPOINT] );
        draw( coordinates[MIDPOINT], coordinates[GOAL] );
        td_variable[CURRENT] := td_variable[GOAL];
        offset := 2.0 * offset;

        // hopefully we're moving out along the tail of the curve!
        // take bigger bites lest we spend forever creeping along!
        td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
        td_variable[GOAL] := td_variable[MIDPOINT] + offset;


        // find next goal
        lat_lon[GOAL] := TDtoLL(td_constant, td_variable[GOAL]);
        coordinates[GOAL] := map_projection(lat_lon[GOAL]);

Quote:
    } else {
        // the chord exhibits too much deviation. Set sights on
        // a shorter segment -- say, *half* as far as original goal
        offset := offset / 2;

        // compute new goal at current + 2*offset...
        // oh, wait! we've already got that as our OLD midpoint
        // cuz our old offset was 2 * our new offset!
        // just reuse them and save that second of number crunching!!
        td_variable[GOAL] := td_variable[MIDPOINT];
        // lat_lon[GOAL] := lat_lon[MIDPOINT]; -- not needed
        coordinates[GOAL] := coordinates[MIDPOINT];

        // find "midpoint" of our newest hopeful chord
        td_variable[MIDPOINT] := td_variable[CURRENT] + offset;
    } // one more time!
}


I'll let others see if there are any nits remaining to pick...

John Larkin
Guest

Thu Sep 01, 2016 4:52 am   



On Sat, 27 Aug 2016 19:37:13 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

Quote:
On 27/08/16 17:58, John Larkin wrote:
It is interesting that HDLs spin fewer bugs per line than procedural
languages like C. Even describing hardware in a language is more
reliable than describing software in a language.

C is not an example of a good procedural language.
That's exemplified by the /second/ C book being the
"C Puzzle Book".

Back then, in the early 80s, processors/caches/memories
and the C language were /much/ simpler. Since then the
complexity of all those things has grown, and with it
the probability of gotchas manifesting themselves.

And apart from that, sometimes I wonder how (or if)
a typical programmer manages to get all the compiler
and linker arguments simultaneously correct.


FPGA logic is usually a collection of synchronous state machines;
there's one clock and everything changes state at the clock edge.
Procedural code wanders all over the place, and the program state is
usually unknowable.

The big hazard in FPGA design is synchronous state machine violations,
namely async inputs and crossing clock domains.

There are lots of professional programmers who don't know what a state
machine is. I've shown a few.
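
[For the record, a minimal C sketch of the idea -- my own
illustration, not from the thread: all state lives in one explicit
variable and transitions happen only at a well-defined "clock" (one
input bit per pass), the software shape of a synchronous state
machine.

#include <stdio.h>

/* A tiny state machine: a UART-style frame detector. */
typedef enum { IDLE, DATA, STOP } State;

int main(void)
{
    /* made-up input: idle line, start bit (0), 3 data bits, stop bit */
    int bits[] = { 1, 1, 0, 1, 0, 1, 1 };
    State state = IDLE;
    int count = 0;

    for (unsigned i = 0; i < sizeof bits / sizeof bits[0]; i++) {
        int b = bits[i];
        switch (state) {   /* next-state logic, one "clock" per bit */
        case IDLE: if (b == 0) { state = DATA; count = 0; } break;
        case DATA: if (++count == 3) state = STOP;          break;
        case STOP: printf("frame %s\n", b ? "ok" : "error");
                   state = IDLE;                            break;
        }
    }
    return 0;
}
]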


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

John Larkin
Guest

Thu Sep 01, 2016 5:05 am   



On Sat, 27 Aug 2016 18:35:38 -0400, krw <krw_at_somewhere.com> wrote:

Quote:
On Sat, 27 Aug 2016 09:58:36 -0700, John Larkin
<jjlarkin_at_highlandtechnology.com> wrote:

On Fri, 26 Aug 2016 22:41:26 -0400, krw <krw_at_somewhere.com> wrote:

On Fri, 26 Aug 2016 06:53:04 -0700, John Larkin
<jjlarkin_at_highlandtechnology.com> wrote:

On Fri, 26 Aug 2016 08:26:50 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

On 26/08/16 05:33, Clifford Heath wrote:
Not in my world. Talk about *not* using test-first development
and you'll simply get walked to the door. It's that central,
almost nothing else is quite so sacrosanct. It's less common
in old-school IT, but the new generation has mostly adopted
it with religious fervor.

TDD is necessary but not sufficient to ensure a good
product.

I've seen too many idiots think that because something
worked because it passed all the tests - the green light
syndrome.

You can't test quality into code. There are more potential bugs than
anyone can imagine.

People tend to not test the things that they are unconsciously
uncertain about. That happens in hardware and software, which is why
nobody should test their work by themselves.

Hardware is usually easier to test, because the stressors, like
voltage ranges and temperature, are simpler.

Not going to agree with you there! For example, it takes some serious
testing to verify (or production test) a microprocessor. The
environment is the easy part.

Well, when you test a uP, you're practically testing code.

Do you consider FPGAs code? The point is that anything sufficiently
complicated takes a lot more testing than a hair drier and variable
power supply.


An FPGA is a silicon IC. They are generally programmed with a binary
configuration file that is produced by compiling VHDL or Verilog or
some such. So as a practical matter, an FPGA executes a programming
language.

But FPGAs have far fewer bugs than uPs running procedural languages.
VHDL and such are not procedural languages, they are hardware
description languages. A FOR loop in VHDL creates N instances of a
hardware structure, all of which will ultimately execute
simultaneously.

Our FPGA boys and girls spend as much time coding test benches as they
spend coding the applications. By the time a board is fired up, it
usually works.


Quote:

But the stress space for hardware testing is only a few dimensions:
temperature, clock rate, supply voltages. WHAT is being tested is
mostly the instruction set, which is really software, even in a RISC
machine. The guts of a uP is defined in VHDL or some such, typed text,
and sometimes microcode with some sort of assembler.

Sometimes VHDL, though I know several that were designed using flow
charts. ;-)


We often flow chart FPGA (and uP) logic on a whiteboard, so several
people can think through the issues together.

Quote:

Not everything is compiled (synthesized), though. A lot of the more
complicated stuff, the VHDL is just a netlist. Not much different
than a schematic.


I've done really complex FPGA stuff using schematic entry. I sort of
miss it.

Quote:

It is interesting that HDLs spin fewer bugs per line than procedural
languages like C. Even describing hardware in a language is more
reliable than describing software in a language.

Strong typing helps.


And parallel processing state machines with one common clock.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Tom Gardner
Guest

Thu Sep 01, 2016 5:13 am   



On 31/08/16 23:52, John Larkin wrote:
Quote:
There are lots of professional programmers who don't know what a state
machine is.


Including some programmers that are actually creating FSMs
for telecom systems - but don't realise it, doh!

Yes, they thought an FSM was something you used when parsing
input during compilation, sigh.


> I've shown a few.

Ditto.

John Larkin
Guest

Thu Sep 01, 2016 5:15 am   



On Sat, 27 Aug 2016 09:36:25 -0700, Don Y
<blockedofcourse_at_foo.invalid> wrote:

Quote:
On 8/26/2016 9:12 AM, John Larkin wrote:
On Fri, 26 Aug 2016 16:41:59 +0100, Stephen Wolstenholme
<steve_at_easynn.com> wrote:

On Fri, 26 Aug 2016 08:30:00 -0700, John Larkin
<jjlarkin_at_highlandtechnology.com> wrote:

But most code looks like that. Most programmers are outright hostile
to comments, as is obvious here. The argument, seen here, is "since
I'll get the comments wrong, we shouldn't have them."

When I write a program I write the comments first. Then it's clear to
me what I want to do with the program.

Steve

Yes. When I design hardware, I write the manual first.

This coming week, I'm going to write the manual/design notes for a new
product that's still in the architecture and rough schematic stage.
We'll later strip that to be the public manual, and keep/maintain the
design notes part.

The advantage of writing a user manual first (or, in my case, having
evolved that to a "tutorial" document) is that it lets you sit back
and look at how "clean" your design is -- or is not.

As the document must be *thorough* and cover every potentiality,
if you find it littered with lots of caveats ("... unless FOO, in
which case, BAR...") then it's a red flag that you probably need
to rethink your approach: it's either inherently flawed OR will
be troublesome for the user due to its complexity (KISS).

E.g., many "programmers" like to FORCE the user to do things in a
particular order -- even when there is no underlying reason for
doing so. Or, worse, because it makes the coder's job easier.

I, instead, like to give the user free rein in as much of the UX
as possible. And, only enforce requirements when they MUST be.
E.g., when you try to advance to the next logical step.

Web coders seem to be notorious for the "you will do it THIS way"
mentality; why do you need my name and address just to give me
a price? Or, shipping information? Ask for a ZIP code (you
probably won't give a more refined price even if you asked for
ZIP+4!) and give me a ballpark estimate.


Why do pull-down country lists often start with Afghanistan?

I usually pick Afghanistan instead of scrolling down near the end to
USA.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Tom Gardner
Guest

Thu Sep 01, 2016 5:17 am   



On 01/09/16 00:05, John Larkin wrote:
Quote:
Our FPGA boys and girls spend as much time coding test benches as they
spend coding the applications. By the time a board is fired up, it
usually works.


Softies are triumphantly reinventing that concept. They call
it TDD, test-driven development.

Now all we have to do is get them to realise that just because
it passes their tests doesn't mean it actually works.

Softies are also triumphantly reinventing schematics, but they
usually stop at hand-crafting the netlist in an incomprehensible
proprietary language expressed in XML.

Me a cynic? Shurely shome mishtake.

Don Y
Guest

Thu Sep 01, 2016 5:26 am   



On 8/31/2016 4:15 PM, John Larkin wrote:
Quote:
Web coders seem to be notorious for the "you will do it THIS way"
mentality; why do you need my name and address just to give me
a price? Or, shipping information? Ask for a ZIP code (you
probably won't give a more refined price even if you asked for
ZIP+4!) and give me a ballpark estimate.

Why do pull-down country lists often start with Afghanistan?

I usually pick Afghanistan instead of scrolling down near the end to
USA.


Because the "coder" did what was convenient for *him* instead
of thinking about the *user*.

Same reason you'll see an appointment listed as scheduled for
"Thursday, Sep 1" instead of "Tomorrow, Thursday, September 1st".

Or, order confirmations like "There are 1 items in your order"
or "There are 1 people (persons) traveling".

In my disdain for the "coder", I pick whatever is easiest for *me*
if there are no consequences to doing so. "OK, I'll just remember
that I told them I was born on Jan 1st when I registered -- *if*
they ever ask!" (as I'll have to make a record of the registration
data, regardless -- you don't think I'm *really* going to tell
them my mother's maiden name or where I went to school?! "Maiden"
and "School", respectively! :-/ )
