Windows rename


Don Y
Guest

Thu Sep 01, 2016 6:37 am   



On 8/31/2016 3:52 PM, John Larkin wrote:
Quote:
On Sat, 27 Aug 2016 19:37:13 +0100, Tom Gardner
spamjunk_at_blueyonder.co.uk> wrote:

On 27/08/16 17:58, John Larkin wrote:
It is interesting that HDLs spin fewer bugs per line than procedural
languages like c. Even describing hardware in a language is more
reliable than describing software in a language.

C is not an example of a good procedural language.
That's exemplified by the /second/ C book being the
"C Puzzle Book".

Back then, in the early 80s, processors/caches/memories
and the C language were /much/ simpler. Since then the
complexity of all those things has grown, and with it
the probability of gotchas manifesting themselves.

And apart from that, sometimes I wonder how (or if)
a typical programmer manages to get all the compiler
and linker arguments simultaneously correct.

FPGA logic is usually a collection of synchronous state machines;
there's one clock and everything changes state at the clock edge.
Procedural code wanders all over the place, and the program state is
usually unknowable.


Software is inherently serial; it takes a significant, focused effort
to arrange for more than one thing to happen in lock-step.

Hardware tends to be more parallel; it takes extra effort to *stagger*
actions.

When I design processors, often the trickiest part is serializing
actions that *must* be serialized. E.g., can't let instruction
#3 proceed until #1 has finished -- despite the fact that #3 is
effectively "complete" (except for the blessing on the doorstep).

OTOH, instruction #2 can complete -- and *has* completed -- regardless
of instruction #1 still being pending.

These are some of the hardest things to test for as you have no idea
what mix of instructions you will encounter, how quickly they will
appear at the execution unit, etc.

The comparable issue in writing software is dealing with asynchronous
event handling -- not knowing when "this" can happen; yet having to
be ready to handle it when it does (or, able to defer it until you
*can* handle it -- but, that means you may have to queue up still
other operations behind it; things that you may not have planned on
having to do!)
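A minimal sketch of that "defer it and queue it up" idea, in C -- the
event type and queue here are made up for illustration, not taken from
any particular system:

#include <stdio.h>

/* A hypothetical event and a small ring buffer used to defer events
 * that arrive while we're busy doing something else. */
typedef struct { int kind; int payload; } event_t;

#define QLEN 8u                        /* power of two, so the unsigned
                                          wraparound arithmetic works */
static event_t queue[QLEN];
static unsigned head, tail;            /* head == tail means "empty" */

static int post_event(event_t e)       /* called when "this" happens */
{
    if (head - tail == QLEN) return -1;     /* full: caller must cope */
    queue[head++ % QLEN] = e;               /* defer it for later */
    return 0;
}

static int next_event(event_t *e)      /* drained when we *can* handle it */
{
    if (head == tail) return 0;
    *e = queue[tail++ % QLEN];
    return 1;
}

int main(void)
{
    post_event((event_t){ 1, 42 });    /* arrives "asynchronously"... */
    post_event((event_t){ 2, 7 });     /* ...while we were busy */

    event_t e;
    while (next_event(&e))             /* handled when we're ready */
        printf("event %d, payload %d\n", e.kind, e.payload);
    return 0;
}

In a real system the producer side would be an ISR or another thread,
which is exactly where the "when can this happen?" questions start to bite.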

Quote:
The big hazard in FPGA design is synchronous state machine violations,
namely async inputs and crossing clock domains.


The same is true in software. You just can't *see* them as easily.
What if THIS result appears after this other computation expects
it to have been completed? How will I know to wait? Or that the
result that *appears* to be waiting is actually stale data associated
with the last "cycle" (aka clock)?

What if a resource I *need* isn't currently available? Do I wait?
If so, does that mean other entities will end up waiting on the
resources that they expect *me* to provide? Who are those
entities? Is there a way I can identify them (at design time)?
Or, notify them (at run time)?

Much of my current design deals with addressing these sorts of
things as they are the "hazards" and "races" that will trip-up
subsequent developers. How do I put mechanisms in place so that
others won't have to be consciously aware of these possibilities
AT EVERY STEP in their development?

(All you have to do -- in a FPGA -- is identify the clocks and
the data sampled by each)

Quote:
There are lots of professional programmers who don't know what a state
machine is. I've shown a few.


I think you'd be surprised at how much software relies on FSM's.
I know of firms who based their entire product offerings on
FSM "interpreters" to provide program structure back more than
40 years.

With software FSM's, you can do all sorts of clever things you
probably wouldn't think of implementing in a hardware FSM
(like unwinding "state history" based on some future action).
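For anyone who hasn't met one, here is a hedged sketch of the
table-driven flavor in C -- made-up states, events and actions, not
taken from any real product:

#include <stdio.h>

/* Hypothetical states and events for a table-driven FSM "interpreter". */
enum state { IDLE, RUNNING, DONE, NSTATES };
enum event { START, TICK, STOP, NEVENTS };

typedef struct {
    enum state next;                   /* state to enter */
    void (*action)(void);              /* work to do on the transition */
} transition_t;

static void noop(void)   { }
static void begin(void)  { puts("begin");  }
static void step(void)   { puts("step");   }
static void finish(void) { puts("finish"); }

/* One row per state, one column per event. */
static const transition_t table[NSTATES][NEVENTS] = {
    [IDLE]    = { [START] = {RUNNING, begin}, [TICK] = {IDLE,    noop},
                  [STOP]  = {IDLE,    noop} },
    [RUNNING] = { [START] = {RUNNING, noop},  [TICK] = {RUNNING, step},
                  [STOP]  = {DONE,    finish} },
    [DONE]    = { [START] = {DONE,    noop},  [TICK] = {DONE,    noop},
                  [STOP]  = {DONE,    noop} },
};

int main(void)
{
    enum state s = IDLE;
    enum event script[] = { START, TICK, TICK, STOP };

    for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
        const transition_t *t = &table[s][script[i]];
        t->action();                   /* "interpret" the table */
        s = t->next;
    }
    return 0;
}

Pushing (state, event) pairs onto a stack as transitions fire is one way
the "state history" trick could be layered on top of a structure like this.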

krw
Guest

Thu Sep 01, 2016 7:21 am   



On Wed, 31 Aug 2016 16:05:35 -0700, John Larkin
<jjlarkin_at_highlandtechnology.com> wrote:

Quote:
On Sat, 27 Aug 2016 18:35:38 -0400, krw <krw_at_somewhere.com> wrote:

On Sat, 27 Aug 2016 09:58:36 -0700, John Larkin
jjlarkin_at_highlandtechnology.com> wrote:

On Fri, 26 Aug 2016 22:41:26 -0400, krw <krw_at_somewhere.com> wrote:

On Fri, 26 Aug 2016 06:53:04 -0700, John Larkin
jjlarkin_at_highlandtechnology.com> wrote:

On Fri, 26 Aug 2016 08:26:50 +0100, Tom Gardner
spamjunk_at_blueyonder.co.uk> wrote:

On 26/08/16 05:33, Clifford Heath wrote:
Not in my world. Talk about *not* using test-first development
and you'll simply get walked to the door. It's that central,
almost nothing else is quite so sacrosanct. It's less common
in old-school IT, but the new generation has mostly adopted
it with religious fervor.

TDD is necessary but not sufficient to ensure a good
product.

I've seen too many idiots think that something worked because
it passed all the tests - the green light syndrome.

You can't test quality into code. There are more potential bugs than
anyone can imagine.

People tend to not test the things that they are unconsciously
uncertain about. That happens in hardware and software, which is why
nobody should test their work by themselves.

Hardware is usually easier to test, because the stressors, like
voltage ranges and temperature, are simpler.

Not going to agree with you there! For example, it takes some serious
testing to verify (or production test) a microprocessor. The
environment is the easy part.

Well, when you test a uP, you're practically testing code.

Do you consider FPGAs code? The point is that anything sufficiently
complicated takes a lot more testing than a hair drier and variable
power supply.

An FPGA is a silicon IC. They are generally programmed with a binary
configuration file that is produced by compiling VHDL or Verilog or
some such. So as a practical matter, an FPGA executes a programming
language.


Not at all. An FPGA "executes" hardware. The difference between an
FPGA and an ASIC is that the FPGA is configurable and the ASIC fixed
at (before) manufacturing. Both are (usually) designed with some high
level language.

Quote:
But FPGAs have far fewer bugs than uPs running procedural languages.
VHDL and such are not procedural languages, they are hardware
description languages. A FOR loop in VHDL creates N instances of a
hardware structure, all of which will ultimately execute
simultaneously.


I think the first part depends on the complexity. One designs a
testbench for all but the most trivial designs and it's not unknown to
find "bugs" in the "code" and even EC the production hardware. Nothing
is different in the ASIC world. I'm sure you've heard the term
"errata".

Quote:
Our FPGA boys and girls spend as much time coding test benches as they
spend coding the applications. By the time a board is fired up, it
usually works.


Sure but fixing bugs is pretty much like debugging code. The
difference is that you can do it before you have hardware.
Quote:


But the stress space for hardware testing is only a few dimensions:
temperature, clock rate, supply voltages. WHAT is being tested is
mostly the instruction set, which is really software, even in a RISC
machine. The guts of a uP is defined in VHDL or some such, typed text,
and sometimes microcode with some sort of assembler.

Sometimes VHDL, though I know several that were designed using flow
charts. ;-)

We often flow chart FPGA (and uP) logic on a whiteboard, so several
people can think through the issues together.


I meant that the design language was the flow chart. That's how
mainframes were done, before VHDL.

Quote:
Not everything is compiled (synthesized), though. A lot of the more
complicated stuff, the VHDL is just a netlist. Not much different
than a schematic.

I've done really complex FPGA stuff using schematic entry. I sort of
miss it.


Not schematic entry. Direct instantiation with a 1:1 correlation
between the VHDL and the netlist.

Quote:

It is interesting that HDLs spin fewer bugs per line than procedural
languages like c. Even describing hardware in a language is more
reliable than describing software in a language.

Strong typing helps.

And parallel processing state machines with one common clock.


No argument. I think there is some difference in expectation too.

Tom Gardner
Guest

Thu Sep 01, 2016 3:00 pm   



On 01/09/16 01:37, Don Y wrote:
Quote:
Software is inherently serial; it takes a significant, focused effort
to arrange for more than one thing to happen in lock-step.


That depends on the scale at which you are considering things.

Consider, for example, a telecom system where there are many
line events and billing events, occurring simultaneously
and asynchronously for each individual call and parts of a
call, as well as all calls currently in existence.

Those events have to be serialised only within a call, not
between calls. That's directly analogous to synchronising
an asynchronous hardware signal with a clock before
processing it in a synchronous FSM.

Don Y
Guest

Thu Sep 01, 2016 11:35 pm   



On 9/1/2016 2:00 AM, Tom Gardner wrote:
Quote:
On 01/09/16 01:37, Don Y wrote:
Software is inherently serial; it takes a significant, focused effort
to arrange for more than one thing to happen in lock-step.

That depends on the scale at which you are considering things.

Consider, for example, a telecom system where there are many
line events and billing events, occurring simultaneously
and asynchronously for each individual call and parts of a
call, as well as all calls currently in existence.

Those events have to be serialised only within a call, not
between calls. That's directly analogous to synchronising
an asynchronous hardware signal with a clock before
processing it in a synchronous FSM.


Duality suggests there will be 1:1 correspondences between similar
issues in each domain.

But, people *think* about software serially: do this, then do that,
then do this other thing. It's the nature of the beast. It's why
parallel processing is "difficult" for most folks to wrap their
heads around.

OTOH, the operative word when designing hardware is "while": WHILE
this is happening, then this OTHER thing should also happen, etc.

E.g., designing a doorbell, you would design N circuits in parallel:
front, side, back doors. The idea of "polling" the circuits
individually would complicate the design (e.g., if you had to
limit the peak power being consumed IN the "doorbell system",
it would call for a more complex design).

By contrast, a software implementation would inherently look at
the individual "bell requests" in some order (even if they were
generated as truly asynchronous "events"/IRQ's). Attempting to
handle all of them *concurrently* would complicate the design.
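A tiny C sketch of that inherently serial shape -- the switch-reading
functions are hypothetical stand-ins, just to make the point runnable:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for sampling the three door switches. */
static bool front_pressed(void) { return true;  }
static bool side_pressed(void)  { return false; }
static bool back_pressed(void)  { return true;  }

int main(void)
{
    /* Even if all three buttons were pressed "simultaneously", the
     * software examines the requests in *some* order. */
    if (front_pressed()) puts("ring front chime");
    if (side_pressed())  puts("ring side chime");
    if (back_pressed())  puts("ring back chime");
    return 0;
}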

I designed a graphics controller that had three i/f's:
- the front end (that ultimately talked to the "data source")
- the back end (that talked to the marking engine)
- the host i/f (that talked to the controlling CPU)

For performance, the device managed an elastic store between the
front and back ends -- so the data source could be presenting data
while the marking engine was being fed PREVIOUSLY presented data.
So, an "in pointer" and an "out pointer" to a circular buffer.

Here, the pointers are physical registers and the buffer a
RAM device. The software analog would be two "variables"
(pointers or indices) and a "memory block/array".

Obvious status information is:
- is buffer empty? (marking engine is starved)
- is buffer full? (data source must be held off)
- how much in buffer? (CPU can throttle marking engine speed)

As data can be coming in and out of buffer "simultaneously"
in each case, you need a means of taking a snapshot of the
state so the snapshot always reflects the *actual* state
of the FIFO.

Software would have to deliberately force a critical region, examine
the in and out pointers, and report their difference (using modular
arithmetic). I.e., it has to do something "special" to handle
things in true parallel fashion (an atomic read of *both* values).
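A hedged sketch of that software snapshot in C (built with -pthread) --
a pthread mutex stands in for whatever critical-region mechanism the
system actually provides, and the names are illustrative only:

#include <pthread.h>
#include <stdio.h>

#define FIFO_SIZE 256u                 /* power of two keeps the modular
                                          arithmetic free */

/* Free-running pointers; the buffer index would be (ptr % FIFO_SIZE). */
static unsigned in_ptr, out_ptr;
static pthread_mutex_t fifo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Snapshot *both* pointers inside one critical region so the reported
 * occupancy reflects a single, consistent state of the FIFO. */
static unsigned fifo_count(void)
{
    pthread_mutex_lock(&fifo_lock);
    unsigned count = in_ptr - out_ptr; /* unsigned wraparound = modular */
    pthread_mutex_unlock(&fifo_lock);
    return count;                      /* 0 = empty, FIFO_SIZE = full */
}

int main(void)
{
    in_ptr  = 10;                      /* pretend the source produced 10 */
    out_ptr = 3;                       /* ...and the sink consumed 3 */
    printf("occupancy: %u\n", fifo_count());   /* prints 7 */
    return 0;
}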

In hardware, I could choose to:
- implement a small, combinatorial adder (subtracter)
- implement a separate U/D counter (UP each time new data arrives;
DN each time old data is consumed) and just ensure it is cleared
when the FIFO is forcibly RESET.
In each case, things are happening at the same time (it would
require extra effort to "compute the status" at some deliberate
time *later*)

The U/D counter option would rarely be used in software as it
doesn't buy anything; the in and out pointers are still required
(though one can be synthesized from the other in concert with the
U/D counter). In hardware, it's just the "adder" repackaged.

Tom Gardner
Guest

Fri Sep 02, 2016 3:03 am   



On 01/09/16 18:35, Don Y wrote:
Quote:
On 9/1/2016 2:00 AM, Tom Gardner wrote:
On 01/09/16 01:37, Don Y wrote:
Software is inherently serial; it takes a significant, focused effort
to arrange for more than one thing to happen in lock-step.

That depends on the scale at which you are considering things.

Consider, for example, a telecom system where there are many
line events and billing events, occurring simultaneously
and asynchronously for each individual call and parts of a
call, as well as all calls currently in existence.

Those events have to be serialised only within a call, not
between calls. That's directly analogous to synchronising
an asynchronous hardware signal with a clock before
processing it in a synchronous FSM.

Duality suggests there will be 1:1 correspondences between similar
issues in each domain.


No. Phone calls aren't simple anymore; they can be
highly structured and there is no guarantee about the
relative arrival of events.

That's inherent in any distributed system, and visible
in spades in telecom systems.


Quote:
By contrast, a software implementation would inherently look at
the individual "bell requests" in some order (even if they were
generated as truly asynchronous "events"/IRQ's). Attempting to
handle all of them *concurrently* would complicate the design.


You can't do that in telecom systems, partly due to the
nature of the problem, and partly because of the
requirement to maximise throughput.

Your subsequent example may have worked for you, but it
wouldn't work in a telecom billing system.

Consider a simple example of a pre-paid voice call. When
the credit runs out, the call must be terminated in (soft)
realtime.

Now consider that the voice call also has a multimedia
component, e.g. video or a tune or whatever. The multimedia
events will originate from different equipment made by a
different company, operated by a different company and
located who knows where. Those events /will/ be asynchronous
w.r.t. the voice call events. Nonetheless they must also
deplete the credit in such a way that all calls are chopped
when the credit drops to zero.

Good luck trying to think and design such a system in a
sequential fashion. Obviously it can be, and is, done, but
it sure as hell isn't a simple serial set of events.

If you want to start to understand it, google for the
"reactor" and "half-async half-sync" design patterns.
There's no magic involved!
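For a flavour of the reactor shape, a bare-bones C sketch using poll() --
one pipe stands in for an event source, the handler name is made up, and
real systems layer queues, worker threads and timers on top of this
skeleton:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define MAXH 4

typedef void (*handler_fn)(int fd);

static struct pollfd fds[MAXH];        /* the descriptors being watched */
static handler_fn handlers[MAXH];      /* one callback per descriptor */
static int nfds;

static void register_handler(int fd, handler_fn fn)
{
    fds[nfds].fd = fd;
    fds[nfds].events = POLLIN;
    handlers[nfds] = fn;
    nfds++;
}

static void on_billing_event(int fd)   /* hypothetical event handler */
{
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("billing event: %s\n", buf); }
}

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) < 0) return 1;

    register_handler(pipefd[0], on_billing_event);

    /* Pretend some other piece of equipment emitted an event. */
    if (write(pipefd[1], "credit -1", 9) < 0) return 1;

    /* One turn of the reactor loop: wait, then dispatch whatever is ready. */
    if (poll(fds, nfds, 1000) > 0)
        for (int i = 0; i < nfds; i++)
            if (fds[i].revents & POLLIN)
                handlers[i](fds[i].fd);
    return 0;
}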

Summary: your example is too simplistic for many
similar real-world systems.

Don Y
Guest

Fri Sep 02, 2016 3:06 am   



On 9/1/2016 2:03 PM, Tom Gardner wrote:
Quote:
Summary: your example is too simplistic for many
similar real-world systems.


My example wasn't trying to address "many similar real-world
systems". Nor does yours address ALL real-world systems. I
guess, by your logic, YOUR example would be equally inappropriate?

krw
Guest

Fri Sep 02, 2016 6:43 am   



On Thu, 1 Sep 2016 10:00:07 +0100, Tom Gardner
<spamjunk_at_blueyonder.co.uk> wrote:

Quote:
On 01/09/16 01:37, Don Y wrote:
Software is inherently serial; it takes a significant, focused effort
to arrange for more than one thing to happen in lock-step.

That depends on the scale at which you are considering things.

Consider, for example, a telecom system where there are many
line events and billing events, occurring simultaneously
and asynchronously for each individual call and parts of a
call, as well as all calls currently in existence.

Those events have to be serialised only within a call, not
between calls. That's directly analogous to synchronising
an asynchronous hardware signal with a clock before
processing it in a synchronous FSM.


An even simpler example is the logic simulator itself.

Martin Brown
Guest

Sat Sep 03, 2016 12:39 am   



On 30/08/2016 14:10, Phil Hobbs wrote:
Quote:
Fortunately, 64-bit and 128-bit floats exist, and for
an accumulator of a long sum that is very desirable, even if the end
result is converted back to single precision.

That's a crutch, though--try computing, say, J0(1000) from the Maclaurin series,
and you'll blow through 128 bits in a big hurry. It needs a different
algorithm.


Indeed. On the flip side though being able to specify 80 bit reals as
accumulators for dot products rather than having to pray that the
optimiser will keep a heavily used value on the stack is advantageous.
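A small C illustration of the wide-accumulator point -- on x86 gcc/clang,
long double typically maps to the 80-bit extended format, though that
mapping is platform-dependent (MSVC keeps long double at 64 bits); the
input values are picked to show the absorption effect:

#include <stddef.h>
#include <stdio.h>

/* Accumulate a single-precision dot product in a wide accumulator. */
static float dotf(const float *a, const float *b, size_t n)
{
    long double acc = 0.0L;            /* the 80-bit (on x87) accumulator */
    for (size_t i = 0; i < n; i++)
        acc += (long double)a[i] * b[i];
    return (float)acc;                 /* round once, at the end */
}

int main(void)
{
    float a[4] = { 1e8f, 1.0f, -1e8f, 1.0f };
    float b[4] = { 1.0f, 1.0f,  1.0f, 1.0f };
    /* Prints 2; a plain float accumulator absorbs the first 1.0 into
     * 1e8 and reports 1 instead. */
    printf("%g\n", dotf(a, b, 4));
    return 0;
}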

Quote:
Also, those facilities very often don't get exposed via high level languages IME,
so you have to trawl through assembly code to see what the compiler did.


10 byte reals were available on PC compilers at one time but they were
withdrawn in later versions. Comparatively few compilers allow them now.

ISTR MickeySoft withdrew 10byte tempreals around MSVC v6.

Regards,
Martin Brown

M Philbrook
Guest

Sat Sep 03, 2016 5:01 am   



In article <nqch0j$p91$3_at_gioia.aioe.org>,
|||newspam|||@nezumi.demon.co.uk says...
Quote:

On 30/08/2016 14:10, Phil Hobbs wrote:
Fortunately, 64-bit and 128-bit floats exist, and for
an accumulator of a long sum that is very desirable, even if the end
result is converted back to single precision.

That's a crutch, though--try computing, say, J0(1000) from the Maclaurin series,
and you'll blow through 128 bits in a big hurry. It needs a different
algorithm.


Indeed. On the flip side though being able to specify 80 bit reals as
accumulators for dot products rather than having to pray that the
optimiser will keep a heavily used value on the stack is advantageous.

Also, those facilities very often don't get exposed via high level languages IME,
so you have to trawl through assembly code to see what the compiler did.

10 byte reals were available on PC compilers at one time but they were
withdrawn in later versions. Comparatively few compilers allow them now.

ISTR MickeySoft withdrew 10byte tempreals around MSVC v6.

Regards,
Martin Brown


They are still there.

Win64 uses MM registers for any floating point calls to libraries and
most likely their tools within the apps.

So a downgrade to 64 bits has been done, just to keep it in a 64-bit
register.

The FPU is still there and you can use the 80-bit float if you need it;
the compiler needs to know how, of course, or you can do inline asm or
call in some lib that has it.

Jamie

Martin Brown
Guest

Sat Sep 03, 2016 3:21 pm   



On 03/09/2016 00:01, M Philbrook wrote:
Quote:
In article <nqch0j$p91$3_at_gioia.aioe.org>,
|||newspam|||@nezumi.demon.co.uk says...


Indeed. On the flip side though being able to specify 80 bit reals as
accumulators for dot products rather than having to pray that the
optimiser will keep a heavily used value on the stack is advantageous.

Also, those facilities very often don't get exposed via high level languages IME,
so you have to trawl through assembly code to see what the compiler did.

10 byte reals were available on PC compilers at one time but they were
withdrawn in later versions. Comparatively few compilers allow them now.

ISTR MickeySoft withdrew 10byte tempreals around MSVC v6.

they are still there.

Win64 uses MM registers for any floating point calls to libraries and
most likely their tools within the apps.

So a downgrade to 64 bits has been done, just to keep it in a 64-bit
register.


It was done more for marketing requirements and because MickeySoft
engineers don't understand numerical algorithms. Excel's polynomial fit
can barely manage to fit a cubic reliably for awkward datasets (although
the polynomial fit in the charting is slightly better).

Quote:
The FPU is still there and you can use the 80-bit float if you need it;
the compiler needs to know how, of course, or you can do inline asm or
call in some lib that has it.


Obviously, but the compiler support for declaring variables that can be
used in high-level language expressions has been withdrawn, making access
to them a lot more difficult (but not impossible).

Regards,
Martin Brown

John Larkin
Guest

Sun Sep 04, 2016 11:28 pm   



On Fri, 26 Aug 2016 12:31:21 -0700, Don Y
<blockedofcourse_at_foo.invalid> wrote:

Quote:
On 8/26/2016 7:13 AM, John Larkin wrote:
On Thu, 25 Aug 2016 21:47:51 -0700, Don Y
blockedofcourse_at_foo.invalid> wrote:

On 8/25/2016 8:50 PM, John Larkin wrote:
Did you have any idea how large
the program would be "on disk" -- let alone "in memory" -- BEFORE you
started to write it?

My PC has gigabytes of ram! The EXE file is 16 Kbytes, typical for a
small program. PB has a #BLOAT metacommand that will make EXE files
arbitrarily bigger if you want to impress people.

My first product ran in 12KB of code, 256 bytes of RAM.
The *stack* in your PB program wouldn't fit in that hardware!

I doubt that a 16 kbyte binary that renames some files needs a 12K
stack. One FOR, one IF, no subroutines.

The 12K is CODE. ROM. Not very useful as stack space.
The RAM complement was 256 bytes. That would be your
stack and all your variables. E.g., the file name that
you read is at least ~8 bytes, probably read into a buffer that
is MAX_PATH long (MAX_PATH, of course, being something like
256 bytes -- oops! I guess no stack space left!)

Note that DIR$ and MID$ are function invocations -- "subroutines".


Naturally. DIR$ has to make an OS call.

Quote:
Also note that the "compiler" no doubt treats every variable reference
as a subroutine invocation -- as well as every operator and possibly
even constants.


I really doubt that. Compiled PB programs are typically faster than
equivalent compiled C; and I have run useful FOR loops that executed
at 100 MHz.

Actually, I could add labels to every line of code and use the CODEPTR
and VARPTR functions to get the address of each line of code and each
variable. So I could disassemble the compile. There is no intermediate
assembly stage to examine, like most c compilers have. The PB compiler
is written in PB and generates binary directly.


I.e., there are lots of CALLs happening even though
you don't see them.

I doubt that, too.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Don Y
Guest

Mon Sep 05, 2016 12:10 am   



On 9/4/2016 10:28 AM, John Larkin wrote:
Quote:
On Fri, 26 Aug 2016 12:31:21 -0700, Don Y
blockedofcourse_at_foo.invalid> wrote:
Did you have any idea how large
the program would be "on disk" -- let alone "in memory" -- BEFORE you
started to write it?

My PC has gigabytes of ram! The EXE file is 16 Kbytes, typical for a
small program. PB has a #BLOAT metacommand that will make EXE files
arbitrarily bigger if you want to impress people.

My first product ran in 12KB of code, 256 bytes of RAM.
The *stack* in your PB program wouldn't fit in that hardware!

I doubt that a 16 kbyte binary that renames some files needs a 12K
stack. One FOR, one IF, no subroutines.

The 12K is CODE. ROM. Not very useful as stack space.
The RAM complement was 256 bytes. That would be your
stack and all your variables. E.g., the file name that
you read is at least ~8 bytes, probably read into a buffer that
is MAX_PATH long (MAX_PATH, of course, being something like
256 bytes -- oops! I guess no stack space left!)

Note that DIR$ and MID$ are function invocations -- "subroutines".

Naturally. DIR$ has to make an OS call.


It's a library call on top of (one or more) OS call/trap(s).

Each call requires building a stack frame for the callee and
tracking the return address to the caller.

Quote:
Also note that the "compiler" no doubt treats every variable reference
as a subroutine invocation -- as well as every operator and possibly
even constants.

I really doubt that. Compiled PB programs are typically faster than
equivalent compiled C; and I have run useful FOR loops that executed
at 100 MHz.


Your variables are *strings*. Operations on strings are typically
implemented as functions/subroutines.

FUNCTION PBMAIN () AS LONG

DEFINT A-Z

Syntactic sugar.

COLOR 15, 1

Calls a subroutine that sets the color of the display (foreground and
background)

CLS

Calls a subroutine that clears the screen

FOR X = 1 TO 100

Creates a temporary variable and initializes it to 1. Later, allows
this point to be revisited (via a JUMP) and said variable incremented.

P$ = DIR$("99D*", 16) ' LOOK UP 99Dxxx TYPE FOLDERS

Another temporary variable to capture the (variable length!) result of
the DIR$ function invocation (which eventually traps to the OS). The
two arguments can be passed as literals, depending on the compiler.

IF P$ <> "" THEN

Call a function to check to see if the P$ string is not equal to the
"empty" string. A compiler *might* optimize this to invoke a different
function designed just to check for "not empty" (as it is a common case).
Depending on the string implementation, it could possibly even perform
this test inline.

Q$ = "Z" + MID$(P$, 4)

Yet another temporary variable to capture the (variable length) result
of concatenating the string constant "Z" with the (variable length) result
of the MID$ function invocation.

PRINT P$; " "; Q$

Call a function to pass the (variable) contents of the string variables
P$ and Q$ to the console ("reassurance") separated by a bit of whitespace.

SLEEP 50

Another function invocation that eventually hooks the OS (for timing
information)

NAME P$ AS Q$ ' RENAME TO Zxxx

Another library function invocation that eventually traps to the OS to
alter the name of the file on the medium.

END IF

Syntactic sugar

NEXT

Jump to loop head

INPUT "Hit any key...", A$

This calls a subroutine to print the (optional) string argument
and then wait for *characters* that it accumulates into A$. I
suspect it also allows some local editing (e.g., backspace to delete
the last key typed).

END FUNCTION

Syntactic sugar.

Quote:
Actually, I could add labels to every line of code and use the CODEPTR
and VARPTR functions to get the address of each line of code and each
variable. So I could disassemble the compile. There is no intermediate
assembly stage to examine, like most c compilers have. The PB compiler
is written in PB and generates binary directly.


No, you have no way of knowing if it is creating machine language
code or some intermediate code that it feeds to an interpreter or
virtual machine.

Note that you can "interpret" code at a variety of different levels
and trade space for time. Many JIT'ed languages do exactly this;
opting for an intermediate representation that isn't quite "assembly
language" but avoids much of the parsing costs associated with more
traditional interpretive techniques.

E.g., in the early 80's, we would write "Macro Packages" that gave
us our own "BASIC" dialect by letting the (macro) *assembler* handle
all the mess of parsing and conveying syntax errors to the developer;
yet generating terse "byte codes" that could be "interpreted" at
near-machine-language speeds.

The Limbo compiler, for example, creates "instructions" for the
Dis virtual machine (hardware independent) which run at almost
the same speed as a "native" implementation would.
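A hedged sketch of that "terse byte codes, tight dispatch loop" idea in
C -- a made-up four-opcode stack-machine dialect, nothing to do with
PB's or Dis's actual internals:

#include <stdio.h>

/* A made-up byte-code dialect: the "compiler" has already done the
 * parsing, so the interpreter is just a tight dispatch loop. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[16], sp = 0;

    for (;;) {
        switch (*code++) {
        case OP_PUSH:  stack[sp++] = *code++;            break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];   break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* "Byte codes" for: print(2 + 3) */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}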

Quote:
I.e., there are lots of CALLs happening even though
you don't see them.

I doubt that, too.


It would be silly for the compiler to inline all of those "keyword
invocations". Your code would quickly grow unreasonably large.
Every PRINT would require the corresponding portions of the "PRINT"
function to be inserted, inline -- instead of simply calling the
function and incurring the cost of the function call.

Wouldn't you think *C* would take the "don't rely on subroutines
and functions" approach if it could eek out any sort of performance
gain -- given that *it* is used far more often to implement *real*
systems?

[You *can* do this -- to a limited extent and only for those
functions/subroutines that you explicitly choose to "in-line".
Use the feature more than sparingly and you're
quickly chagrined at how big your code gets in short order!]
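In C the trade-off is visible with the inline hint -- a trivial,
hypothetical example:

#include <stdio.h>

/* The compiler *may* copy the body of an inline function into every
 * call site (code size up, call overhead gone); the plain function
 * exists once and is CALLed. Either way, inline is only a hint. */
static inline int add_inline(int a, int b) { return a + b; }
static int add_called(int a, int b)        { return a + b; }

int main(void)
{
    printf("%d %d\n", add_inline(1, 2), add_called(3, 4));
    return 0;
}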

Michael A. Terrell
Guest

Sun Sep 11, 2016 7:30 am   



Don Y wrote:
Quote:
On 8/29/2016 3:02 PM, Michael A. Terrell wrote:
Don Y wrote:

From my viewpoint in this chair, ignoring all of the "applications"
potentially running in the PC besides me (and anything that I've
designed):
- the BT earpiece and "sender"
- my (handheld) camera
- the "controller" in the hard disk inside my PC
- the controller in the keyboard
- the controller in the mouse
- the controller in the optical drive
- the controller in the LCD monitor
- the printer at my feet
- the processor in the cordless phone -- and its base unit
- the processor(s) in the TV
- the DVD player
- the router in this desk drawer (I can't see the modem from here)
- the processor *inside* the furnace

Most of these don't HAVE "update paths" -- user/consumer's only option
is to discard and buy something else. Yet, when was the last time you
discarded a camera/keyboard/mouse/printer/phone/etc. because of a BUG?

A lot of Barracuda SATA hard drives were replaced because of a software
bug. They would work for a while, but after enough reboots, they
locked up.

A disk drive *should* have a firmware upgrade path. Whether the bug
effectively *blocked* that or not is a different issue.


It would lock up the computer. I had a 1TB drive for about four
months when it started rebooting the computer. There is a serial TTL
port on the drive, and you can update the firmware, but in doing so,
there is a very high likelihood of losing everything on the drive. I
found the commands to reset the controller, and a 2mm connector for the
serial interface. I have only recovered about 15% of the data, so I have
never flashed the firmware.


Quote:
OTOH, think of the timeframe in which you sought that "example".
Now, think of how many devices (that you personally own) suffered
HARDWARE failures in that same timeframe.


That was the only hardware failure, in several years.


Quote:
[I suspect a two week period doesn't pass without some friend or
colleague offering me some bit of kit that has "died" in the
preceding 14 days: "Yours if you want it" (i.e., if you want
to FIX it!)]

I.e., it's somehow "acceptable" that hardware can crap out (note
how many consumer devices carry *~90* day warranties!) and be
discarded. Yet, software (that will *never* "wear out")
is derided if it doesn't perform perfectly.


My favorite word processor ran on the Commodore 64. Speedscript had
features that I have seen in no other program.

Quote:

Would folks be as "tolerant" if they had to eat any bugs they
discovered AFTER that 90 day window?


Don Y
Guest

Sun Sep 11, 2016 2:14 pm   



On 9/10/2016 10:42 PM, Michael A. Terrell wrote:
Quote:
OTOH, think of the timeframe in which you sought that "example".
Now, think of how many devices (that you personally own) suffered
HARDWARE failures in that same timeframe.

That was the only hardware failure, in several years.


Really? I have monitors that crap out every few years, ditto
for TV's. Just had to repair two iPods that suffered from the
designers' failure to anticipate (and protect against) overcharging,
etc.

Quote:
[I suspect a two week period doesn't pass without some friend or
colleague offering me some bit of kit that has "died" in the
preceding 14 days: "Yours if you want it" (i.e., if you want
to FIX it!)]

I.e., it's somehow "acceptable" that hardware can crap out (note
how many consumer devices carry *~90* day warranties!) and be
discarded. Yet, software (that will *never* "wear out")
is derided if it doesn't perform perfectly.

My favorite word processor ran on the Commodore 64. Speedscript had
features that I have seen in no other program.


My first favorite (CP/M days) was Electric Blackboard. Most delightful
feature (for that timeframe) was the ability to specify "cursor direction".
So, you could set it to "down" and then type a vertical column of
"whatever" (handy for inserting a "comment character" in front of
several contiguous lines of text).

[This was in the days of serial terminals so having a screen oriented
editor was a HUGE win -- no waiting for line-at-a-time screen updates, etc.]

Or, type a line of asterisks (with direction set to "right").
Then, set direction to "down" and type MORE asterisks. Then "left"
for still more. Finally, "up" to complete the drawing of the "box".

In the early PC days, Brief became my favorite. It was speedy,
small, programmable, etc.

Now, I adapt to whatever is available (I work on several different
machines using several different operating systems, etc.). Often,
an application implements (or requires) a particular "editor"
so it's easier to adapt than it would be to insist the tool adapt
to *you*!

Quote:
Would folks be as "tolerant" if they had to eat any bugs they
discovered AFTER that 90 day window?


Michael A. Terrell
Guest

Tue Sep 13, 2016 6:58 am   



Don Y wrote:
Quote:
On 9/10/2016 10:42 PM, Michael A. Terrell wrote:
OTOH, think of the timeframe in which you sought that "example".
Now, think of how many devices (that you personally own) suffered
HARDWARE failures in that same timeframe.

That was the only hardware failure, in several years.

Really? I have monitors that crap out every few years, ditto
for TV's. Just had to repair two iPods that suffered from the
designers' failure to anticipate (and protect against) overcharging,
etc.


I repair a lot of failed electronics that I pick up as scrap from
thrift stores. I just have very few problems with my equipment. I
see how much stuff gets scrapped when it has too many problems. :)


Quote:
[I suspect a two week period doesn't pass without some friend or
colleague offering me some bit of kit that has "died" in the
preceding 14 days: "Yours if you want it" (i.e., if you want
to FIX it!)]

I.e., it's somehow "acceptable" that hardware can crap out (note
how many consumer devices carry *~90* day warranties!) and be
discarded. Yet, software (that will *never* "wear out")
is derided if it doesn't perform perfectly.

My favorite word processor ran on the Commodore 64. Speedscript had
features that I have seen in no other program.

My first favorite (CP/M days) was Electric Blackboard. Most delightful
feature (for that timeframe) was the ability to specify "cursor direction".
So, you could set it to "down" and then type a vertical column of
"whatever" (handy for inserting a "comment character" in front of
several contiguous lines of text).

[This was in the days of serial terminals so having a screen oriented
editor was a HUGE win -- no waiting for line-at-a-time screen updates,
etc.]

Or, type a line of asterisks (with direction set to "right").
Then, set direction to "down" and type MORE asterisks. Then "left"
for still more. Finally, "up" to complete the drawing of the "box".

In the early PC days, Brief became my favorite. It was speedy,
small, programmable, etc.

Now, I adapt to whatever is available (I work on several different
machines using several different operating systems, etc.). Often,
an application implements (or requires) a particular "editor"
so it's easier to adapt than it would be to insist the tool adapt
to *you*!


Did you ever use a word processor to edit a second copy of itself? :)


--
Never piss off an Engineer!

They don't get mad.

They don't get even.

They go for over unity! Wink
