What is the problem with this?

On Nov 26, 7:27 am, JosephKK <quiettechb...@yahoo.com> wrote:
On Mon, 24 Nov 2008 11:06:22 -0800 (PST), junee <azeez...@gmail.com
wrote:



On Nov 24, 1:44 pm, paas <pabloalvarezsanc...@gmail.com> wrote:
can I go with this technique?
11111 + 11111 = 1111111111

Thanks.

Interesting idea, but it is in fact a fallacy. Not having a '1' can be
represented as having a '0'. What you are doing is replacing the '0'
with a blank space, but the concept of '0' is still there.

where did I say I'd use '0's?

let's see another example

6/2=3, instead of that

111111 / 11 = 111

got it?

Thanks.
what about other people?
any ideas from you?

How do you indicate the end of a number?
by comparing with a 'big number'

Got it.

any doubts here?
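For the curious: junee's tally scheme is easy to model in software. A minimal Python sketch (the names u, u_add, u_sub, u_div are invented for illustration); a unary number is a string of '1's, addition is concatenation, and division is counting how many times the divisor can be removed:

    def u(n):                       # int -> unary, e.g. u(5) == '11111'
        return '1' * n

    def u_add(a, b):                # addition is just concatenation
        return a + b

    def u_sub(a, b):                # drop one '1' from a per '1' in b
        assert len(a) >= len(b), "no negative tallies"
        return a[len(b):]

    def u_div(a, b):                # count how many times b fits into a
        q = ''
        while len(a) >= len(b):
            a, q = u_sub(a, b), q + '1'
        return q

    assert u_add(u(5), u(5)) == u(10)   # 11111 + 11111 = 1111111111
    assert u_div(u(6), u(2)) == u(3)    # 111111 / 11 = 111

Note there is still no '0' symbol anywhere, yet the empty string plays exactly the role paas describes: the absence of a mark is itself a value.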
 
John Doe wrote:
John Fields <jfields@austininstruments.com> wrote:

We stopped using stones to count with a long time ago. ;)

Binary stones.
Well, no, unary stones.

The unary numeral system is the bijective base-1 numeral system.
:)

Don...


--
Don McKenzie

Site Map: http://www.dontronics.com/sitemap
E-Mail Contact Page: http://www.dontronics.com/email

http://www.dontronics-shop.com/super4-usb-relay-module.html
http://www.wizard-from-oz.com 1000's of electronic items
 
junee wrote:

Thanks for the explanation and clarifications.
Even Morse code is a clocked binary text system.

A unary (base-1) system is very impractical, for a multitude of reasons,
as described by the previous posters.

Makes my eyes spin (insert rolly-eye icon here)

Don...


--
Don McKenzie

Site Map: http://www.dontronics.com/sitemap
E-Mail Contact Page: http://www.dontronics.com/email

http://www.dontronics-shop.com/super4-usb-relay-module.html
http://www.wizard-from-oz.com 1000's of electronic items
 
Well, the logic would be interesting at least, even if utterly impractical.
So just use it for smaller numbers, NBD. Represent larger numbers with
different forms, like digits or powers -- that's how bigger numbers always
end up represented anyway (in order of scale: digits, mantissa with exponent,
exponents of exponents, raising operator). That's just implementation, and
as long as you've cooked up a Turing-complete computer, it doesn't really
matter in the end.

I think the biggest impracticality people raise is: well, how the hell do
you represent 1000? With a thousand wires? That's absurd of course, and
besides, it demands that there be a zero (7 = ...00000001111111), and that
isn't necessary. (There is the necessity of null as the absence of one,
however. In that sense it is still binary.) Indeed, a number can be
represented on just one wire, and electronic rules require that to be
something like voltage or current or time. (Frequency, phase, impedance
(complex!) and more properties can be sent down wires, but I can't imagine
how complex an impedance-signaled computer would be to construct!) Time
would be easy because, digitally, it could be ticked off in discrete
quantities with a clock (a bus could be self-clocked, so only data and
ground wires are needed, which would look something like a serial line, but
self-clocked). Notice that a counter is nothing more than a
unary-to-binary converter. ORing together the edges (XORing?) of two data
streams and clocking a counter forms an adder; with an up/down counter, one
stream counting up and the other counting down forms a subtractor. By clocking in
one number and adding it to itself (the addition could be done in binary
just for expediency...) with each clock of a second stream, a rate
multiplier is formed. If two data streams start simultaneously, it is easy
to see which number is larger: the lesser string stops first. If they don't
start simultaneously, you can count the number of cycles before the next
start and adjust for it, which just goes back to the subtractor, which is
fundamentally what a comparison is, after all. Representations of real
numbers can be produced with arbitrary timings, though moving to the analog
domain, uncertainty becomes the limit to accuracy.
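Tim's counter-based arithmetic is simple to sketch in software. This is an illustration of the idea only, not a gate-level model, and the pulse streams are idealized so that edges never collide (his "XORing?" caveat):

    def counter(up_pulses, down_pulses=0):
        # A counter is a unary-to-binary converter: a burst of
        # pulses in, a binary count out.
        count = 0
        for _ in range(up_pulses):
            count += 1
        for _ in range(down_pulses):
            count -= 1
        return count

    assert counter(5 + 6) == 11    # adder: merge two pulse streams into one counter
    assert counter(9, 4) == 5      # subtractor: up/down counter

    # Comparator: the sign of the up/down result carries the same
    # information as "whose pulse burst stopped first".
    a, b = 7, 3
    assert (counter(a, b) > 0) == (a > b)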

Alternatively, as HardySpicer observed, voltage could be used. The physical
phenomenon of voltage can even be seen as unary, which is kind of fun.
Noise of course puts a limit on accuracy. Interfacing to regular digital
circuitry is similar, using an ADC/DAC instead of a counter. There is one
advantage to analog: exponents are easy to compute, thanks to exponential
function blocks being all around us: the silicon junction.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms

"junee" <azeez541@gmail.com> wrote in message
news:25a82af6-9be8-4126-939c-4c1b67e661a6@o40g2000prn.googlegroups.com...
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111


Thanks.
 
On Nov 26, 10:00 pm, "Tim Williams" <tmoran...@charter.net> wrote:
Well, the logic would be interesting at least, even if utterly impractical.
So just use it for smaller numbers, NBD.  Represent larger numbers with
different forms, like digits or powers -- that's how bigger numbers always
end up represented anyway (in order of scale: digits, mantissa with exponent,
exponents of exponents, raising operator).  That's just implementation, and
as long as you've cooked up a Turing-complete computer, it doesn't really
matter in the end.

I think the biggest impracticality people raise is: well, how the hell do
you represent 1000?  With a thousand wires?  That's absurd of course, and
besides, it demands that there be a zero (7 = ...00000001111111), and that
isn't necessary.  (There is the necessity of null as the absence of one,
however.  In that sense it is still binary.)  Indeed, a number can be
represented on just one wire, and electronic rules require that to be
something like voltage or current or time.  (Frequency, phase, impedance
(complex!) and more properties can be sent down wires, but I can't imagine
how complex an impedance-signaled computer would be to construct!)  Time
would be easy because, digitally, it could be ticked off in discrete
quantities with a clock (a bus could be self-clocked, so only data and
ground wires are needed, which would look something like a serial line, but
self-clocked).  Notice that a counter is nothing more than a
unary-to-binary converter.  ORing together the edges (XORing?) of two data
streams and clocking a counter forms an adder; with an up/down counter, one
stream counting up and the other counting down forms a subtractor.  By clocking in
one number and adding it to itself (the addition could be done in binary
just for expediency...) with each clock of a second stream, a rate
multiplier is formed.  If two data streams start simultaneously, it is easy
to see which number is larger: the lesser string stops first.  If they don't
start simultaneously, you can count the number of cycles before the next
start and adjust for it, which just goes back to the subtractor, which is
fundamentally what a comparison is, after all.  Representations of real
numbers can be produced with arbitrary timings, though moving to the analog
domain, uncertainty becomes the limit to accuracy.

Alternatively, as HardySpicer observed, voltage could be used.  The physical
phenomenon of voltage can even be seen as unary, which is kind of fun.
Noise of course puts a limit on accuracy.  Interfacing to regular digital
circuitry is similar, using an ADC/DAC instead of a counter.  There is one
advantage to analog: exponents are easy to compute, thanks to exponential
function blocks being all around us: the silicon junction.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms

"junee" <azeez...@gmail.com> wrote in message

news:25a82af6-9be8-4126-939c-4c1b67e661a6@o40g2000prn.googlegroups.com...

These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111

Thanks.
Oh, thanks for the explanation...
 
On Wed, 26 Nov 2008 05:49:41 -0800 (PST), junee <azeez541@gmail.com>
wrote:

On Nov 26, 7:27 am, JosephKK <quiettechb...@yahoo.com> wrote:
On Mon, 24 Nov 2008 11:06:22 -0800 (PST), junee <azeez...@gmail.com
wrote:



On Nov 24, 1:44 pm, paas <pabloalvarezsanc...@gmail.com> wrote:
can I go with this technique?
11111 + 11111 = 1111111111

Thanks.

Interesting idea, but it is in fact a fallacy. Not having a '1' can be
represented as having a '0'. What you are doing is replacing the '0'
with a blank space, but the concept of '0' is still there.

where did I say I'd use '0's?

let's see another example

6/2=3, instead of that

111111 / 11 = 111

got it?

Thanks.
what about other people?
any ideas from you?

How do you indicate the end of a number?

by comparing with a 'big number'

Got it.

any doubts here?
No. It does not make any sense. How long does the compare take?
Answer the question (well, now two questions).

Can you say very many doubts?
 
On Tue, 25 Nov 2008 20:08:21 -0800, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

On Tue, 25 Nov 2008 19:51:36 -0800, JosephKK <quiettechblue@yahoo.com
wrote:

On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU. So the results
of operations are not synchronized to a clock, but propagate onward at their
natural speed. The synchronization is done by delay matching at the critical
points. Ideally, that should work faster than clocked logic; perhaps the
variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress is
that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now... besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with to do handshaking between multiple
asynchronous modules, perhaps discuss various historical results (like the
Hennessy & Patterson book does... when I took a class using it in college
years ago, the professor was pretty darned good so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions), etc.

---Joel


There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.


ME: Use a transparent latch.

Xilinx Software: WARNING -- You are using a transparent latch!

John
Is there even a proper definition for that?
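On the asynchronous-logic subthread: the module-to-module handshaking Joel mentions is classically a four-phase (return-to-zero) req/ack protocol. Here is a rough software model, with threads standing in for independently-timed logic blocks; the Wire class is invented for the sketch, and no claim is made about any real CPU's scheme:

    import threading

    class Wire:
        # A level-signalled wire you can block on.
        def __init__(self):
            self.level = False
            self.cv = threading.Condition()
        def drive(self, level):
            with self.cv:
                self.level = level
                self.cv.notify_all()
        def wait_for(self, level):
            with self.cv:
                self.cv.wait_for(lambda: self.level == level)

    req, ack, data = Wire(), Wire(), [None]
    received = []

    def sender(values):
        for v in values:
            data[0] = v             # data must be valid before req rises
            req.drive(True)         # phase 1: "data is valid"
            ack.wait_for(True)      # phase 2: receiver has taken it
            req.drive(False)        # phase 3: return to zero
            ack.wait_for(False)     # phase 4: cycle complete

    def receiver(n):
        for _ in range(n):
            req.wait_for(True)      # data is guaranteed valid here
            received.append(data[0])
            ack.drive(True)
            req.wait_for(False)
            ack.drive(False)

    vals = list(range(8))
    t = threading.Thread(target=receiver, args=(len(vals),))
    t.start(); sender(vals); t.join()
    assert received == vals         # in order, with no clock anywhere

The four round trips per transfer are part of the speed problem; two-phase (transition) signalling halves the wire activity at the cost of hairier logic.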
 
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.
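Checking the arithmetic: with exactly two of five wires high there are indeed C(5,2) = 10 patterns, one per decimal digit, and each digit decodes with a single 2-input AND. A Python sketch; the digit-to-pair assignment below is arbitrary, not the 1401's actual one:

    from itertools import combinations

    pairs = list(combinations(range(5), 2))
    assert len(pairs) == 10          # exactly ten 2-of-5 codes

    def decode(wires):               # wires: five booleans
        assert sum(wires) == 2       # free error check: any other count is invalid
        for digit, (a, b) in enumerate(pairs):
            if wires[a] and wires[b]:    # one 2-input AND per digit
                return digit

    w = [False] * 5
    for bit in pairs[7]:             # encode the digit 7
        w[bit] = True
    assert decode(w) == 7

As a bonus, 2-of-5 detects any single-bit error: a flipped wire leaves the wrong number of 1s.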

Another common representation that is easily decoded is what you get
if you build a 5-bit-wide twisted ring counter. This is very widely
used for decoded decimal counters (e.g. the CD4017 has this design
internally), and Spehro even posted a circuit, a few years back in one
of my threads, where the decoders are not even AND gates but are in
fact the output LEDs. Economically brilliant.

Tim.
That is nice, just be sure you initialize your twisted ring counter
properly.
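A quick model of the twisted ring (Johnson) counter, including why the initialization warning matters: a 5-bit register has 32 states but the ring only visits 10, and in a plain Johnson counter a stray power-up state never rejoins the legal sequence without correction logic. A sketch:

    def step(state):                     # shift right, feed back the inverted last bit
        return [1 - state[-1]] + state[:-1]

    state, seen = [0, 0, 0, 0, 0], []    # properly initialized
    for _ in range(10):
        seen.append(tuple(state))
        state = step(state)
    assert state == [0, 0, 0, 0, 0]      # home again after 10 ticks
    assert len(set(seen)) == 10          # one state per decimal digit

    # A stray state such as 01010 cycles forever outside the ring:
    stray = [0, 1, 0, 1, 0]
    for _ in range(100):
        stray = step(stray)
        assert tuple(stray) not in seen

Each legal state is identifiable from just two bits, which is why the decoders can be 2-input gates, or just LEDs, per Spehro's circuit.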
 
On Tue, 25 Nov 2008 13:13:04 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 12:11:35 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <kkboi4psvfqie2fg8iku4m2kdoohshtp7q@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)
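(An aside on table-lookup addition, since it sounds stranger than it is: here is a sketch of digit-serial decimal addition where the only "arithmetic" is indexing a 10x10 table. Where the 1620 actually kept its tables in core is beyond this sketch.)

    from itertools import zip_longest

    ADD_TABLE = {(a, b): ((a + b) % 10, (a + b) // 10)
                 for a in range(10) for b in range(10)}

    def add_decimal(xs, ys):
        # Add little-endian decimal digit lists using table lookups only.
        out, carry = [], 0
        for a, b in zip_longest(xs, ys, fillvalue=0):
            s, c1 = ADD_TABLE[(a, b)]     # digit + digit: pure lookup
            s, c2 = ADD_TABLE[(s, carry)] # fold in the carry: lookup again
            out.append(s)
            carry = c1 or c2              # both can't be 1: 9+9+1 = 19 at most
        if carry:
            out.append(carry)
        return out

    # 275 + 896 = 1171, digits stored least-significant first
    assert add_decimal([5, 7, 2], [6, 9, 8]) == [1, 7, 1, 1]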

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".


Those were binary machines, no? 16 bits, maybe.

Yes, 1130s were binary. They were popular for that sort of
"project" though.

I suppose google knows all this stuff.

I recall that the 1130 was the first "personal" computer, and maybe
the first computer that saw serious use in realtime process control.

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.
What distorted timeline did that come from? DEC self-destructed from
the internal conflict of VAX versus Alpha.
 
On Tue, 25 Nov 2008 06:08:21 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111

It's called "the unary number system". Superseded by binary in 1937 or
so when a guy built the first binary adder in his kitchen. (I'm not
making this up!)

Unary/tally systems are used in various logic and encoding systems
where you know that the largest number will be something very small
and reasonable. Variations on it, especially in the way binary numbers
get decoded from it, make up things like "priority encoders" (which IS
a variation of the simple unary system). It's not very economical for
numbers bigger than your example.

Tim.
What was that faint rumble?
It was Tim flying miles above and away from junee's general area.
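Tim's priority-encoder remark in concrete form: a priority encoder accepts a rail of request lines, of which everything below the winner may also be set (a thermometer/unary pattern in the limit), and reports only the highest '1'. A minimal sketch:

    def priority_encode(lines):
        # index of the highest asserted line, or None if the rail is idle
        for i in reversed(range(len(lines))):
            if lines[i]:
                return i
        return None

    assert priority_encode([1, 1, 1, 1, 1, 0, 0, 0]) == 4   # unary in, binary out
    assert priority_encode([1, 0, 1, 0, 0, 0, 0, 0]) == 2   # highest request wins
    assert priority_encode([0] * 8) is None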
 
On 24 Nov., 19:32, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111

Thanks.
As a kid I invented my own multiplier:

The multiplicand was accumulated, multiplier times.
Numbers got larger and I discovered more effective ways.
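The two stages of that discovery, side by side in Python. Shift-and-add is the usual "more effective way" (one conditional add per bit rather than one add per count); whether it is the one Rayed found, he doesn't say:

    def mult_by_accumulation(multiplicand, multiplier):
        acc = 0
        for _ in range(multiplier):   # accumulate the multiplicand, multiplier times
            acc += multiplicand
        return acc

    def mult_shift_add(multiplicand, multiplier):
        acc = 0
        while multiplier:
            if multiplier & 1:        # low bit set: add the shifted multiplicand
                acc += multiplicand
            multiplicand <<= 1        # shift instead of re-adding
            multiplier >>= 1
        return acc

    assert mult_by_accumulation(123, 45) == mult_shift_add(123, 45) == 5535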
 
On Mon, 24 Nov 2008 10:32:04 -0800 (PST), junee <azeez541@gmail.com>
wrote:

These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111
This isn't quite what you're talking about, but an interesting
real-life use of this sort of thing is a flash A/D converter (not to
be confused with the unrelated and much more recent FLASH memory),
which uses a large number of comparators (converting to N bits takes 2^N - 1
comparators). The positive comparator inputs all go to the signal
input, and the negative inputs go to successive taps on a resistor
string with a DC voltage across it. As the input voltage increases,
more and more comparators turn on as the voltage goes over that at
its tap in the resistor string.

This gives a number of 1 outputs proportional to the input voltage.
These can then go into a logic block called a "thermometer encoder"
which converts the highest '1' bit into its corresponding binary
value.
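A toy model of that converter, with arbitrary reference and tap values: the comparator rail produces a thermometer code, and for a valid thermometer code "find the highest 1" and "count the 1s" give the same answer, so this sketch simply counts (which is also the thread's counter-as-unary-to-binary-converter trick):

    N = 3                            # a 3-bit converter
    VREF = 8.0
    taps = [VREF * (i + 1) / 2**N for i in range(2**N - 1)]  # resistor-string taps

    def flash_adc(vin):
        thermometer = [vin > t for t in taps]     # one comparator per tap
        # sanity: a valid thermometer code is all 1s, then all 0s
        assert sorted(thermometer, reverse=True) == thermometer
        return sum(thermometer)                   # thermometer -> binary

    assert flash_adc(0.5) == 0
    assert flash_adc(4.6) == 4       # taps at 1.0, 2.0, ..., 7.0 volts
    assert flash_adc(7.9) == 7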

 
On Thu, 27 Nov 2008 02:53:17 +1100, Don McKenzie <5V@2.5A> wrote:

John Doe wrote:
John Fields <jfields@austininstruments.com> wrote:

We stopped using stones to count with a long time ago. ;)

Binary stones.

Well, no, unary stones.

The unary numeral system is the bijective base-1 numeral system.
:)

Don...
Bijective? I hope that means both injective and surjective (one to
one and onto). Not that this guess is any clearer for many people
here.
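For what it's worth, "bijective" there describes the numeral system rather than a function pair: in bijective base-k the digits run 1..k with no zero, so every non-negative integer has exactly one representation (zero being the empty string). Base 1 is then exactly tally marks. A sketch:

    def to_bijective(n, k):
        # digits 1..k, most significant first; n == 0 gives the empty list
        digits = []
        while n > 0:
            n, r = divmod(n - 1, k)
            digits.append(r + 1)
        return digits[::-1]

    assert to_bijective(5, 1) == [1, 1, 1, 1, 1]   # base 1: five tally marks
    assert to_bijective(14, 2) == [2, 2, 2]        # bijective binary: 2*4 + 2*2 + 2
    assert to_bijective(0, 3) == []                # zero: the empty string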
 
On Nov 27, 7:20 am, JosephKK <quiettechb...@yahoo.com> wrote:
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
Another common representation that is easily decoded is what you get
if you build a 5-bit-wide twisted ring counter. This is very widely
used for decoded decimal counters (e.g. the CD4017 has this design
internally), and Spehro even posted a circuit, a few years back in one
of my threads, where the decoders are not even AND gates but are in
fact the output LEDs. Economically brilliant.

Tim.

That is nice, just be sure you initialize your twisted ring counter
properly.
The good circuits (e.g. Spehro's or the CD4017) self-correct if they
get into a disallowed sequence.

Tim.
 
On Nov 27, 7:28 am, JosephKK <quiettechb...@yahoo.com> wrote:
On Tue, 25 Nov 2008 13:13:04 -0600, krw <k...@att.zzzzzzzzz> wrote:
In article <9phoi4p4tha9ccj68vf29vuin6d8687...@4ax.com>,
jjlar...@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 12:11:35 -0600, krw <k...@att.zzzzzzzzz> wrote:

In article <kkboi4psvfqie2fg8iku4m2kdoohsht...@4ax.com>,
jjlar...@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <k...@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl...@4ax.com>,
jjlar...@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
sho...@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.

The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps.  Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.  
Addition was done in a lookup table.  ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s.  They were often
used for "instrumentation".

Those were binary machines, no? 16 bits, maybe.

Yes, 1130s were binary.  They were popular for that sort of
"project" though.

I suppose google knows all this stuff.

I recall that the 1130 was the first "personal" computer, and maybe
the first computer that saw serious use in realtime process control.

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.  
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.  

What distorted timeline did that come from?  DEC self-destructed from
the internal conflict of VAX versus Alpha.
That might be the view of somebody internally, but IMHO as an external
DEC user/customer, the problem began when they started trying to
compete with IBM in the mini-mainframe market. DEC did great with
small lab machines up through the 1980s and early 90s, even after
their marketing division began completely ignoring that market in the
late 80s in their push to sell only mainframe-class machines.

Don't get me wrong, VMS's clustering technology was and still is way
cool, and has many uses outside the mainframe-class world.

Tim.
 
In article <4ripi4l5fq728f9011i1p8d1r79a549m4h@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...>
On Tue, 25 Nov 2008 19:51:36 -0800, JosephKK <quiettechblue@yahoo.com
wrote:

On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU. So the results
of operations are not synchronized to a clock, but propagate onward at their
natural speed. The synchronization is done by delay matching at the critical
points. Ideally, that should work faster than clocked logic; perhaps the
variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress is
that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now... besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with to do handshaking between multiple
asynchronous modules, perhaps discuss various historical results (like the
Hennessy & Patterson book does... when I took a class using it in college
years ago, the professor was pretty darned good so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions), etc.

---Joel


There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.


ME: Use a transparent latch.
Aren't all latches transparent?

Xilinx Software: WARNING -- You are using a transparent latch!
Xilinx's rules/technology/software hates latches. Others have
nothing but. When in Rome, do as the groundrules writers do.
 
In article <cf4ti4le55t9j4cga9bhgbr8kl4kg4iubi@4ax.com>,
quiettechblue@yahoo.com says...>
On Tue, 25 Nov 2008 13:13:04 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 12:11:35 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <kkboi4psvfqie2fg8iku4m2kdoohshtp7q@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".


Those were binary machines, no? 16 bits, maybe.

Yes, 1130s were binary. They were popular for that sort of
"project" though.

I suppose google knows all this stuff.

I recall that the 1130 was the first "personal" computer, and maybe
the first computer that saw serious use in realtime process control.

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.

What distorted timeline did that come from? DEC self-destructed from
the internal conflict of VAX versus Alpha.
They were on the skids well before Alpha. In fact, Alpha was their
only possible salvation. One turning point, as told by the insiders, was
when the suits demanded they be called "d*i*g*i*t*a*l", rather than
"DEC".

As always, you have your head firmly up your ass.

--
Keith
 
On Mon, 1 Dec 2008 12:37:56 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <4ripi4l5fq728f9011i1p8d1r79a549m4h@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 19:51:36 -0800, JosephKK <quiettechblue@yahoo.com
wrote:

On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU. So the results
of operations are not synchronized to a clock, but propagate onward at their
natural speed. The synchronization is done by delay matching at the critical
points. Ideally, that should work faster than clocked logic; perhaps the
variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress is
that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now... besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with to do handshaking between multiple
asynchronous modules, perhaps discuss various historical results (like the
Hennessy & Patterson book does... when I took a class using it in college
years ago, the professor was pretty darned good so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions), etc.

---Joel


There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.


ME: Use a transparent latch.

Aren't all latches transparent?
No. Some are edge-triggered, such as 74HC74.

Xilinx Software: WARNING -- You are using a transparent latch!

Xilinx's rules/technology/software hates latches. Others have
nothing but. When in Rome, do as the groundrules writers do.
...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona 85048 Skype: Contacts Only | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
In article <d305e11f-2db0-420b-a8a8-
f570e90abc4f@k8g2000yqn.googlegroups.com>, info2@rayed.de says...>
On 24 Nov., 19:32, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The ALU, the main part of the CPU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without
the concept of binary, instead with only a single
signal, what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 11111 = 1111111111

Thanks.

As a kid I invented my own multiplier:

The multiplicand was accumulated, multiplier times.
I went through the multiplier => adder tree phase as a kid.

Numbers got larger and I discovered more effective ways.
That's when I discovered that counters weren't the only use for
sequential logic. ;-)

--
Keith
 
In article <k7c8j492pfa8ug98oa2cbrcg228jcgj0ae@4ax.com>, To-Email-
Use-The-Envelope-Icon@My-Web-Site.com says...>
On Mon, 1 Dec 2008 12:37:56 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <4ripi4l5fq728f9011i1p8d1r79a549m4h@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 19:51:36 -0800, JosephKK <quiettechblue@yahoo.com
wrote:

On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU. So the results
of operations are not synchronized to a clock, but propagate onward at their
natural speed. The synchronization is done by delay matching at the critical
points. Ideally, that should work faster than clocked logic; perhaps the
variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress is
that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now... besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with to do handshaking between multiple
asynchronous modules, perhaps discuss various historical results (like the
Hennessy & Patterson book does... when I took a class using it in college
years ago, the professor was pretty darned good so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions), etc.

---Joel


There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.


ME: Use a transparent latch.

Aren't all latches transparent?

No. Some are edge-triggered, such as 74HC74.
That's a D-Type master-slave flip-flop, no matter what the mustard
bible says.

Xilinx Software: WARNING -- You are using a transparent latch!

Xilinx's rules/technology/software hates latches. Others have
nothing but. When in Rome, do as the groundrules writers do.

...Jim Thompson
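The distinction the last few posts are arguing over, in behavioral form (logic only, no timing model): a transparent latch follows D whenever its enable is high, while an edge-triggered flop, master-slave or otherwise, samples D only at the clock edge:

    class TransparentLatch:
        def __init__(self):
            self.q = 0
        def tick(self, d, enable):
            if enable:                # transparent: output follows input
                self.q = d
            return self.q             # enable low: holds the last value

    class DFlipFlop:
        def __init__(self):
            self.q = 0
            self.prev_clk = 0
        def tick(self, d, clk):
            if clk and not self.prev_clk:   # rising edge only
                self.q = d
            self.prev_clk = clk
            return self.q

    latch, ff = TransparentLatch(), DFlipFlop()
    latch.tick(1, 1); ff.tick(1, 1)
    # D changes while enable/clock is still high:
    assert latch.tick(0, 1) == 0     # the latch passes the change through
    assert ff.tick(0, 1) == 1        # the flop captured D only at the edge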
 
