What is the problem with this?

On Nov 25, 7:32 am, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111

Thanks.
This is a form of code, and it is very inefficient indeed! Don't go
there! The beauty of any number system is that it has weighted digits,
which means large numbers don't need huge numbers of digits.
You could try base 4, perhaps, if you could figure out the electronics.
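A quick Python sketch of those growth rates (illustrative only; the
helper names are made up):

  # Representation length: linear in unary, logarithmic in any weighted base.
  def unary_len(n):
      return n                 # one mark per count

  def digits(n, base):
      d = 0
      while n:
          n //= base
          d += 1
      return max(d, 1)         # "0" still takes one digit

  for n in (10, 100, 1000000):
      print(n, unary_len(n), digits(n, 2), digits(n, 4))
  # 10      ->      10 marks,  4 bits,  2 base-4 digits
  # 100     ->     100 marks,  7 bits,  4 base-4 digits
  # 1000000 -> 1000000 marks, 20 bits, 10 base-4 digits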


Hardy
 
On Nov 24, 8:02 pm, SloppyChoppy (Chun...@zongazonga.zip) wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111

1. For communication issues, how would you clock your data out? Most
communication with the outside world happens by timing or transitions,
meaning the duration of time that a signal sits at a particular level, or
a transition from 0 to 1 or 1 to 0. Even internally, how would you shift
data that is all 1's through the ALU?

2. By only using logic-1 bits, you've dramatically increased the number of
bits needed to represent numbers. Your math was of the form 11111 + 11111 =
1111111111. Compare that to the common binary 0101 + 0101 = 1010. Your answer
needs more than twice the number of bits to represent what binary puts in 4 bits.

3. By your math example you are using 10 bits to represent 10 values, whereas
those same 10 bits in binary would cover 1024 values (zero inclusive). Imagine
the problem of handling a simple multiplication, 10 * 10 = 100: your scheme
would need 100 bits, whereas binary needs only 7 bits to represent 100 = 1100100.

4. What would be the data borders? I mean, in binary we have the bit (1), nibble (4),
byte (8), word (16), Dword (32) and 64-bit (um... 64). How could you tell when one
data "denomination" starts and stops? How could you sync on a constant
stream of 1's? And what would be the representation of On/Off, True/False, Yes/No?

5. Even with the size of hard drives today, how could you store any sizable
amount of (usable) data using only 1's? (Refer back to your original math
example, and see the sketch after this list.)
Consider the number of 1's you'd have to store to represent 100 or
1,000,000: you'd burn a full megabit, about 122 KB of disk space, just to
store the number 1,000,000, which in binary is 11110100001001000000 =
20 bits = 1 word + 1 nibble = 2 bytes + 1 nibble. How would a program be
stored using all 1's? And how much disk space, time and memory would it
take to read, write, decode and process a data stream of all 1's... a DC signal?

6. Even given the fast processors of today, how could you expect to process any
real amount of (usable) data, given point #5? Forget using Fourier
transforms, or even representing Pi to more than 2 decimal places, both for
speed and for the data space used. Your system would collapse under all the
reading and writing it would have to do, even if it could communicate with
memory or disk subsystems using only ones.

7. An all-logic-1 system would be a pure DC computer: every state always on.
No modulation, no level shifts, nothing to phase-compare... just a current sink.
Totally impractical and improbable. It's like having a light switch with both
positions being 'On': it causes high electric bills and loss of valuable sleep,
i.e., a waste of money and a waste of time.
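To make points 2, 3 and 5 concrete, here's a small Python sketch (purely
illustrative) of the storage cost:

  # Bits needed to store n: unary vs. binary (weighted digits).
  def unary_bits(n):
      return n                       # one '1' per count

  def binary_bits(n):
      return max(n.bit_length(), 1)  # floor(log2(n)) + 1

  for n in (10, 100, 1000000):
      print(n, unary_bits(n), binary_bits(n))
  # 10      ->      10 vs  4  (1010)
  # 100     ->     100 vs  7  (1100100)
  # 1000000 -> 1000000 vs 20  (11110100001001000000); a megabit,
  #            about 122 KB, of disk just to write down one number.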

Thanks for the explanation and clarifications.
 
On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111
It's called "the unary number system". Superseded by binary in 1937 or
so when a guy built the first binary adder in his kitchen. (I'm not
making this up!)

Unary/tally systems are used in various logic and encoding systems
where you know that the largest number will be something very small
and reasonable. Variations on it, especially in the way binary numbers
get decoded from it, make up things like "priority encoders" (which ARE
variations of the simple unary system). It's not very economical for
numbers bigger than your example.
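In software terms, a priority encoder collapses a unary/thermometer input
into the binary index of its highest set bit. A toy Python sketch (the
function name is made up, just for illustration):

  # Priority encoder sketch: thermometer code in, binary index out.
  def priority_encode(word, width=8):
      for i in range(width - 1, -1, -1):  # scan from the MSB down
          if word & (1 << i):
              return i                    # index of the highest '1'
      return None                         # nothing asserted

  print(priority_encode(0b00001111))  # -> 3
  print(priority_encode(0b00000001))  # -> 0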

Tim.
 
On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.
If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.
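You can check that count directly; a quick Python sketch (mine, just to
count the patterns):

  # Two bits set out of five: C(5,2) = 10 patterns, one per digit 0..9.
  # (The 1401's actual digit-to-pattern assignment was different; this
  # only shows there are exactly ten.)
  from itertools import combinations

  codes = [(1 << a) | (1 << b) for a, b in combinations(range(5), 2)]
  for digit, code in enumerate(codes):
      print(digit, format(code, '05b'))
  print(len(codes))  # -> 10
  # Each pattern decodes with one 2-input AND across its two set bit lines.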

Another common representation that is easily decoded is what you get
if you build a 5-bit-wide twisted ring counter. This is very widely
used for decoded decimal counters (e.g. the CD4017 has this design
internally), and Spehro even posted a circuit, a few years back in one
of my threads, where the decoders are not even AND gates but are in
fact the output LEDs. Economically brilliant.
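A quick simulation of that counter (a sketch; the real CD4017 is of
course hardware, not Python):

  # 5-stage twisted-ring (Johnson) counter: feed back the inverted
  # last stage. It cycles through 2*5 = 10 distinct states.
  state = [0, 0, 0, 0, 0]
  for count in range(10):
      print(count, ''.join(map(str, state)))
      state = [1 - state[-1]] + state[:-1]  # shift in ~Q of last stage
  # 00000, 10000, 11000, ... 11111, 01111, ... 00001: each count is
  # identified by one pair of adjacent bits, i.e. one 2-input gate.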

Tim.
 
On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111

Thanks.
What you have is sometimes called "base-1" arithmetic. I've only seen
it used in simple examples for programming a Turing Machine. Maybe
that's your calling ;)

http://en.wikipedia.org/wiki/Turing_machine
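The usual toy exercise there is unary addition; a Python rendering of the
tape trick (a sketch with a made-up tape format, not a real TM simulator):

  # Turing-machine-flavoured unary addition: tape "111+11" means 3 + 2.
  # Overwrite the '+' with a mark, then erase one surplus mark.
  def unary_add(tape):
      cells = list(tape)
      cells[cells.index('+')] = '1'  # join the two runs of marks
      cells.pop()                    # drop the extra mark
      return ''.join(cells)

  print(unary_add("111+11"))  # -> "11111"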

--
Joe

 
On Tue, 25 Nov 2008 06:39:16 -0800 (PST), "J.A. Legris"
<jalegris@sympatico.ca> wrote:

On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111

Thanks.

What you have is sometimes called "base-1" arithmetic. I've only seen
it used in simple examples for programming a Turing Machine. Maybe
that's your calling ;)

http://en.wikipedia.org/wiki/Turing_machine

--
Joe

Along similar minimalist lines:
http://en.wikipedia.org/wiki/Brainfuck
 
On Tue, 25 Nov 2008 06:39:16 -0800 (PST), "J.A. Legris"
<jalegris@sympatico.ca> wrote:

On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111
^^^^^^ ^^^^^^^^^^

Thanks.

What you have is sometimes called "base-1" arithmetic. I've only seen
it used in simple examples for programming a Turing Machine. Maybe
that's your calling ;)

http://en.wikipedia.org/wiki/Turing_machine

--
Joe
"Calling"? Maybe not. Check his arithmetic ;-)

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona 85048 Skype: Contacts Only | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.
The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

Geez, I've done a lot of weird stuff.

Another common representation that is easily decoded is what you get
if you build a 5-bit-wide twisted ring counter. This is very widely
used for decoded decimal counters (e.g. the CD4017 has this design
internally), and Spehro even posted a circuit, a few years back in one
of my threads, where the decoders are not even AND gates but are in
fact the output LED's. Economically brilliant.
Here are a few tube counter things; 7M zip file. Some use the neon
bulb thresholds as part of the decode logic.

ftp://66.117.156.8/Counters.zip

John
 
On Tue, 25 Nov 2008 06:08:21 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 1:32 pm, junee <azeez...@gmail.com> wrote:
These days people are mostly using binary [digital] processors,
and me too; they are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept
of digital; I mean there are only 1's and no 0's.

The main part of the CPU, the ALU, is responsible
for all arithmetic calculations.

If I want to design the ALU from scratch without the concept
of binary, instead with only a single signal,
what are the difficulties one has to face?
For example: 5+5=10; instead of that,
can I go with this technique?
11111 + 111111 = 1111111111

It's called "the unary number system". Superseded by binary in 1937 or
so when a guy built the first binary adder in his kitchen. (I'm not
making this up!)

A serial adder is easy: let all those bits through, then let all those
other bits through.
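In stream form that adder is just concatenation; a Python sketch (mine,
illustrative only):

  # Serial unary adder: pass every mark of A, then every mark of B.
  def serial_unary_add(a, b):
      for mark in a:       # let all those bits through...
          yield mark
      for mark in b:       # ...then all those other bits
          yield mark

  print(''.join(serial_unary_add("111", "11")))  # 3 + 2 -> "11111"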

John
 
In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.
For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)
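Table-lookup addition is easy to picture; a Python sketch (mine, not the
1620's actual tables or digit encoding):

  # CADET-style digit addition: sum and carry come from 10x10 tables.
  SUMS    = [[(a + b) % 10  for b in range(10)] for a in range(10)]
  CARRIES = [[(a + b) // 10 for b in range(10)] for a in range(10)]

  def add_decimal(x, y):
      xs, ys = str(x)[::-1], str(y)[::-1]   # digit-serial, LSD first
      out, carry = [], 0
      for i in range(max(len(xs), len(ys))):
          a = int(xs[i]) if i < len(xs) else 0
          b = int(ys[i]) if i < len(ys) else 0
          s = SUMS[a][b] + carry
          carry = CARRIES[a][b] + (1 if s > 9 else 0)
          out.append(str(s % 10))
      if carry:
          out.append(str(carry))
      return int(''.join(reversed(out)))

  print(add_decimal(478, 256))  # -> 734, no adder circuit consulted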

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.
I know several who did similar things on 1130s. They were often
used for "instrumentation".

<snip>
 
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".
Those were binary machines, no? 16 bits, maybe.

I suppose google knows all this stuff.

John
 
In article <kkboi4psvfqie2fg8iku4m2kdoohshtp7q@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".


Those were binary machines, no? 16 bits, maybe.
Yes, 1130s were binary. They were popular for that sort of
"project" though.

I suppose google knows all this stuff.
 
On Tue, 25 Nov 2008 12:11:35 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <kkboi4psvfqie2fg8iku4m2kdoohshtp7q@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".


Those were binary machines, no? 16 bits, maybe.

Yes, 1130s were binary. They were popular for that sort of
"project" though.

I suppose google knows all this stuff.
I recall that the 1130 was the first "personal" computer, and maybe
the first computer that saw serious use in realtime process control.

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

John
 
In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 12:11:35 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <kkboi4psvfqie2fg8iku4m2kdoohshtp7q@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 10:49:51 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <bl9oi4l66e56vtmkc060rme722r71cl5h7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
On Tue, 25 Nov 2008 06:24:23 -0800 (PST), Tim Shoppa
<shoppa@trailing-edge.com> wrote:

On Nov 24, 5:54 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
IBM also, for some strange reason, used "star code" in parts of some
machines, like the 1401, where each decimal digit 0..9 was represented
by two bits set out of five. There are, I think, exactly 10 such
codes.

If your logic is based on decimal digits and 2-input AND gates are
very economical it works out very nicely. I didn't know what it was
called or that it was in the 1401.


The ALU section of the 1401 was called "main star", and as you suggest
probably did its math in star mode. Stuff outside was mostly BCD or
character codes. The beast directly executed character strings out of
core, so "assembly programming" was really machine code programming.
You could type these programs directly into core via the Selectric.

For small values of "character string" and "assembly programming",
perhaps. Wasn't the Selectric well after the 1401?

The 1620 could add in whatever format floated your boat. It was
known as the "Cadet" because it Couldn't Add and Didn't Even Try.
Addition was done in a lookup table. ;-)

I designed a SAR ADC, using transistors, and interfaced it to a 1401.
I hasten to point out that the machine was already an antique when I
did it. We hacked it in to the logic that was supposed to interface
the realtime clock, which they didn't have. The 1401 RTC was actually
a clockwork mechanism that drove switch contacts.

I know several who did similar things on 1130s. They were often
used for "instrumentation".


Those were binary machines, no? 16 bits, maybe.

Yes, 1130s were binary. They were popular for that sort of
"project" though.

I suppose google knows all this stuff.

I recall that the 1130 was the first "personal" computer, and maybe
the first computer that saw serious use in realtime process control.

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.
They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.

--
Keith
 
On Tue, 25 Nov 2008 13:13:04 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
[snip]

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.
DEC sure did "lose their way". Isn't it amazing how dominant
companies can come crashing down so rapidly?

(Alan Kotok was one of my buddies at MIT... model railroad club ;-)

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona 85048 Skype: Contacts Only | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
"Jim Thompson" <To-Email-Use-The-Envelope-Icon@My-Web-Site.com> wrote in message
news:u1koi4hkgqapsjn1lbk9n0carada5188cs@4ax.com...
On Tue, 25 Nov 2008 13:13:04 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
[snip]

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.

DEC sure did "lose their way". Isn't it amazing how dominant
companies can come crashing down so rapidly?

(Alan Kotok was one of my buddies at MIT... model railroad club ;-)

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona 85048 Skype: Contacts Only | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.

Interesting read on the Analog Devices MACSYM system. We're still using a few...

http://web.mit.edu/6.933/www/Fall2000/macsym.pdf
 
In article <u1koi4hkgqapsjn1lbk9n0carada5188cs@4ax.com>, To-Email-
Use-The-Envelope-Icon@My-Web-Site.com says...
On Tue, 25 Nov 2008 13:13:04 -0600, krw <krw@att.zzzzzzzzz> wrote:

In article <9phoi4p4tha9ccj68vf29vuin6d8687avn@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...
[snip]

IBM sure had a lot of very strange machines, up until the 360 line
lent a bit of coherence. But IBM sort of lost interest in process
control.

They didn't really lose interest as much as DEC cleaned their
clock, until the S/38 (and then 43XX) and then DEC lost their way.
IBM had the System-7 and Series-1, though they were a mess compared
to the DEC offerings.

DEC sure did "lose their way". Isn't it amazing how dominant
companies can come crashing down so rapidly?
I saw the demise of DEC coming a long way off. From a product
perspective, the "Rainbow" was the nail in the coffin, IMO. I was
not much impressed by our VAX-11/780, either. The examples of
their arrogance and management incompetence are enough to fill a
swamp, but they couldn't manage that either.

(Alan Kotok was one of my buddies at MIT... model railroad club ;-)
--
Keith
 
On Mon, 24 Nov 2008 11:06:22 -0800 (PST), junee <azeez541@gmail.com>
wrote:

On Nov 24, 1:44 pm, paas <pabloalvarezsanc...@gmail.com> wrote:
can I go with this technique
11111 + 111111 = 1111111111

Thanks.

Interesting idea, but it is in fact a fallacy. Not having a '1' can
be represented as having a '0'. What you are doing is replacing the '0'
with a blank space, but the concept of '0' is still there.

Where did I say I'll use '0's?

Let's see another example:

6/2=3; instead of that,

111111 / 111 = 11

Got it?

Thanks.
What about other people?
Any ideas from you?
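For what it's worth, that division is just repeated subtraction; a quick
Python sketch (mine, purely illustrative):

  # Unary division: strip one copy of the divisor per quotient mark.
  def unary_divide(dividend, divisor):
      quotient = ""
      while len(dividend) >= len(divisor):
          dividend = dividend[len(divisor):]  # subtract the divisor once
          quotient += "1"                     # tally one into the quotient
      return quotient

  print(unary_divide("111111", "111"))  # -> "11"   (6 / 3 = 2)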
How do you indicate the end of a number?
 
On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
<zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU, where the
results of operations are not synchronized to a clock but propagate onward
at their natural speed. The synchronization is done by delay matching at the
critical points. Ideally, that should work faster than clocked logic; perhaps
the variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress
is that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now. Besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with for handshaking between multiple
asynchronous modules, and perhaps various historical results, like the
Hennessy & Patterson book does. (When I took a class using it in college
years ago, the professor was pretty darned good, so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions.)
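For a flavour of that handshaking, here is a toy sketch (mine; real async
logic does this with wires and C-elements, not print statements) of a
four-phase request/acknowledge transfer:

  # Four-phase (return-to-zero) handshake: the only contract is the
  # ordering of the four events, with no clock anywhere.
  def four_phase_transfer(data):
      print("sender:   req=1, data =", hex(data))   # 1. data valid
      print("receiver: ack=1 (data latched)")       # 2. consumer takes it
      print("sender:   req=0")                      # 3. request released
      print("receiver: ack=0 (idle, ready again)")  # 4. back to idle

  four_phase_transfer(0x5A)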

---Joel
There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.
 
On Tue, 25 Nov 2008 19:51:36 -0800, JosephKK <quiettechblue@yahoo.com>
wrote:

On Mon, 24 Nov 2008 15:27:56 -0800, "Joel Koltner"
<zapwireDASHgroups@yahoo.com> wrote:

"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> wrote in message
news:6uGWk.4982$8_3.1230@flpi147.ffdc.sbc.com...
BTW, I have heard many times of the idea of an asynchronous CPU, where the
results of operations are not synchronized to a clock but propagate onward
at their natural speed. The synchronization is done by delay matching at the
critical points. Ideally, that should work faster than clocked logic; perhaps
the variance of the delays kills the idea.

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction
decoders, but what I've heard the presenters of papers on this topic stress
is that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out
there by now. Besides going over the usual discussion of how you avoid race
conditions with your min terms/max terms, it'd also discuss the various clever
schemes people have come up with for handshaking between multiple
asynchronous modules, and perhaps various historical results, like the
Hennessy & Patterson book does. (When I took a class using it in college
years ago, the professor was pretty darned good, so typically the "meat" of
H&P was just review anyway -- and it's not like the math was hard -- but I
always looked forward to their end-of-chapter "real life examples"
discussions.)

---Joel


There is and there isn't. It is presumed covered in the combinatorial
logic and sequential logic courses. But, of course it isn't really
covered. Most state machine courses are trash as well.
ME: Use a transparent latch.

Xilinx Software: WARNING -- You are using a transparent latch!

John
 
