Driver to drive?

rickman wrote:
My point is that power will be the primary issue with CPUs
in the coming years. Even today many are not so happy with
cell phones that have to be charged more than once a day.
Processing speed will be taking a secondary seat to power
consumption and the product mix in the market will reflect
that; fewer desktops and laptops with more tablets, PDAs and
cell phones.

Rick
Who knows, bloated inefficient software may become unacceptable.
 
On 1/3/2013 4:18 PM, Phil Hobbs wrote:

The American Optical taper is used for mounting tools on optical
polishing machines.

None of my optical finishing or optomechanics books has a specification,
but I did come across these folks:

http://www.lensmastertooling.com/grindingandpolishing.html

who supply laps and grinding tools with an AO taper. You might ask them.

Cheers

Phil Hobbs
Phil,

Thank you. I'll give them a call tomorrow. I'm working with a company
that makes an A-O taper reamer to build the tools that folks use... But
their reamer drawings are off by almost 0.020" at the large end of the
taper as compared to the tool that is apparently made by said reamer.

...And I checked the small end too - which is where I lined up and matched the
diameters.

Fun stuff!

Again, thank you. Much appreciated.


--
http://tinyurl.com/My-Official-Response

Regards,
Joe Agro, Jr.
(800) 871-5022 x113
01.908.542.0244
Flagship Site: http://www.Drill-HQ.com
Automatic / Pneumatic Drills: http://www.AutoDrill.com
Multiple Spindle Drills: http://www.Multi-Drill.com
Production Tapping: http://www.Drill-HQ.com/?page_id=226
VIDEOS: http://www.youtube.com/user/AutoDrill
FACEBOOK: http://www.facebook.com/AutoDrill
TWITTER: http://twitter.com/AutoDrill

V8013-R
 
On Thu, 03 Jan 2013 15:06:24 -0500, rickman <gnuarm@gmail.com> wrote:

On 1/2/2013 11:04 PM, krw@attt.bizz wrote:
On Wed, 02 Jan 2013 18:42:03 -0500, rickman<gnuarm@gmail.com> wrote:

On 1/2/2013 6:30 PM, krw@attt.bizz wrote:
On Wed, 02 Jan 2013 18:22:41 -0500, rickman<gnuarm@gmail.com> wrote:

I was a bit surprised by the huge adoption of mobile computing over the
last 10 years. Having gotten over my surprise, I expect the PC to
become a much smaller player to the handheld and tablet form factors with
the resulting emphasis on very low power and the resulting change in
emphasis in processing technology from density to power consumption.

I've been hearing that song for a couple of decades, too.

The reality is that we can't know what the next "big thing" is. We do
know it's not the current "big thing".


Not sure why you say that is a "couple of decades" old. In '93 they
were still pushing for "longer, lower, wider" to quote the auto
industry's motto during the years when they blithely promoted fancier
cars with shiny doodads instead of safety or lower pollution.

Perhaps not '93, but by '95 the main issue of PCs had turned to power.
It didn't get fixed, mainly because the packaging people were better
than anyone gave them credit for.

PCs
didn't reach their peak power dissipation until they were over 100 Watts
in what, 2000-something? Now hardly any PC CPU uses 100 Watts. I think
even the ultra-powerful server CPUs try to keep the
power consumption down as it costs more to cool the equipment than it
does to power it.

Just because the problem wasn't "solved" (it really never was - people
just got bored with balls-to-the-wall performance), doesn't mean it
wasn't a primary concern.

I'm not sure what your point is. There was a tradeoff between building
chips which were larger and ran faster and chips that used less power.
The choice isn't larger or less power (they've gotten larger). The
choice is faster or less power.

The fact that they continued to build the faster chips and didn't build
the lower power ones says to me they are only now being forced by the
market to go for low power at the expense of performance.
They didn't. Processors aren't faster now than they were five years
ago. They went for lower power. That's the point.


My point is that power will be the primary issue with CPUs in the coming
years.
Well, fuckin' *duh*. It's been the primary issue for almost a decade.
Speed has hit a brick wall.

Even today many are not so happy with cell phones that have to
be charged more than once a day.
It's a pretty piss poor cell phone that has to be charged more than
once a day, unless it's serving as a hot spot, or something similar.

Processing speed will be taking a
secondary seat to power consumption and the product mix in the market
will reflect that; fewer desktops and laptops with more tablets, PDAs
and cell phones.
It ain't the processor that's the power drain in cell phones.
 
On Thu, 03 Jan 2013 15:31:00 -0600, Richard Owlett
<rowlett@pcnetinc.com> wrote:

rickman wrote:
[snip]

My point is that power will be the primary issue with CPUs
in the coming years. Even today many are not so happy with
cell phones that have to be charged more than once a day.
Processing speed will be taking a secondary seat to power
consumption and the product mix in the market will reflect
that; fewer desktops and laptops with more tablets, PDAs and
cell phones.

Rick

Who knows, bloated inefficient software may become unacceptable.
Not bloody likely. The demand is for a new generation of phones every
six months. That's hardly conducive to any rigorous development cycle
and I don't see it changing.
 
In article <kc2fga$j31$1@dont-email.me>,
rickman <gnuarm@gmail.com> writes:
From what I've heard, the real issues are not advances in process
technology, but that we don't have any good use for more transistors in
CPUs or more CPUs on a chip. Plus, the market has been changing so that
more transistors aren't what is needed, but less power consumption for
the highly portable devices.
Servers can always use more CPUs per chip.

Servers are power limited too. That's nice since they don't have to
design separate CPUs any more.


--
These are my opinions. I hate spam.
 
In article <kc2fga$j31$1@dont-email.me>,
rickman <gnuarm@gmail.com> writes:

Regardless of whether the curve is softening a little or not, I am
amazed that it has lasted as long as it has. Lately I have been hearing
that we may really be reaching the end of what can be done "easily",
meaning it is getting a lot harder to keep extending that curve. I
understand we will be seeing some fundamental limits becoming real
problems around 10 nm... but I have heard that song play on the radio
before.
I see 2 interesting limits coming soon:

The width of a line will get below the diameter of an atom.

The cost of a fab line will exceed the GDP.

I'm not sure which one will happen first.

--
These are my opinions. I hate spam.
 
On 1/3/2013 6:50 PM, krw@attt.bizz wrote:
On Thu, 03 Jan 2013 15:06:24 -0500, rickman<gnuarm@gmail.com> wrote:

On 1/2/2013 11:04 PM, krw@attt.bizz wrote:
On Wed, 02 Jan 2013 18:42:03 -0500, rickman<gnuarm@gmail.com> wrote:

On 1/2/2013 6:30 PM, krw@attt.bizz wrote:
On Wed, 02 Jan 2013 18:22:41 -0500, rickman<gnuarm@gmail.com> wrote:

I was a bit surprised by the huge adoption of mobile computing over the
last 10 years. Having gotten over my surprise, I expect the PC to
become a much smaller player to the handheld and tablet form factors with
the resulting emphasis on very low power and the resulting change in
emphasis in processing technology from density to power consumption.

I've been hearing that song for a couple of decades, too.

The reality is that we can't know what the next "big thing" is. We do
know it's not the current "big thing".


Not sure why you say that is a "couple of decades" old. In '93 they
were still pushing for "longer, lower, wider" to quote the auto
industry's motto during the years when they blithely promoted fancier
cars with shiny doodads instead of safety or lower pollution.

Perhaps not '93, but by '95 the main issue of PCs had turned to power.
It didn't get fixed, mainly because the packaging people were better
than anyone gave them credit for.

PCs
didn't reach their peak power dissipation until they were over 100 Watts
in what, 2000-something? Now hardly any PC CPU uses 100 Watts. I think
even the ultra-powerful server CPUs try to keep the
power consumption down as it costs more to cool the equipment than it
does to power it.

Just because the problem wasn't "solved" (it really never was - people
just got bored with balls-to-the-wall performance), doesn't mean it
wasn't a primary concern.

I'm not sure what your point is. There was a tradeoff between building
chips which were larger and ran faster and chips that used less power.

The choice isn't larger or less power (they've gotten larger). The
choice is faster or less power.

The fact that they continued to build the faster chips and didn't build
the lower power ones says to me they are only now being forced by the
market to go for low power at the expense of performance.

They didn't. Processors aren't faster now than they were five years
ago. They went for lower power. That's the point.
You started talking about this being the issue some twenty years ago;
now you are talking about five years ago. Which are we discussing?


My point is that power will be the primary issue with CPUs in the coming
years.

Well, fuckin' *duh*. It's been the primary issue for almost a decade.
Speed has hit a brick wall.
Now you say 10 years ago. Speed has hit limits because of limits to
memory bandwidth. Speed is largely increased by adding more transistors
to allow more parallelism. Some 5 years ago they got to the limit of
what can be done efficiently with a single CPU and a virtually unlimited
supply of transistors. So they started adding more CPUs. The memory
bandwidth couldn't keep up with this past two processors, so quad cores
started to give diminishing returns, and from what I have read,
eight-core chips show very little performance improvement over quads.
That is why speed has stopped improving - not clock speeds. Clock speeds
stopped making progress some 12 years ago, when the Pentium 4 didn't
significantly outperform the PIII and AMD ate Intel's lunch with a
shorter-pipelined but slower-clocked Athlon. But processor speeds
(processing speeds, not clock speeds) continued to rise for a few more
years.
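A rough, hedged way to see the diminishing-returns argument in numbers.
This is a toy model, not a measurement: it assumes half the work scales
with core count while the other half is memory traffic that a shared bus
can only deliver at roughly "two cores' worth" of demand. All the
fractions below are invented for illustration.

def speedup(cores, compute_fraction=0.5, bandwidth_cores=2.0):
    # compute part scales with core count; memory part is capped by the bus
    mem_fraction = 1.0 - compute_fraction
    return 1.0 / (compute_fraction / cores +
                  mem_fraction / min(cores, bandwidth_cores))

for c in (1, 2, 4, 8):
    print(c, "cores ->", round(speedup(c), 2), "x")
# prints roughly 1.0, 2.0, 2.67, 3.2 - quads help somewhat, eight cores barely more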


Even today many are not so happy with cell phones that have to
be charged more than once a day.

It's a pretty piss poor cell phone that has to be charged more than
once a day, unless it's serving as a hot spot, or something similar.
Yes, the iPhone has been selling very poorly due to this problem. Apple
may be going out of business soon. All sarcasm aside, Apple has been
widely criticized for the power drain on these phones.


Processing speed will be taking a
secondary seat to power consumption and the product mix in the market
will reflect that; fewer desktops and laptops with more tablets, PDAs
and cell phones.

It ain't the processor that's the power drain in cell phones.
Do you have any numbers? Has anyone analyzed an iPhone to see where the
power is going?

My point is that with the shift from plugged in units to mobile units,
the power issue will be dominant over the speed issues. If they can't
maintain the power levels or lower them, processor speeds will not
increase. Of course, the processors in the mobile devices are at the
point desktop processors were some 12 years ago. So if history repeats
itself, we *won't* see significant processor speed increases in mobile
products after a few more years because the architectures are maxed out
and process improvements can't give us anything more.

Rick
 
Hal Murray wrote:

In article <kc2fga$j31$1@dont-email.me>,
rickman <gnuarm@gmail.com> writes:

Regardless of whether the curve is softening a little or not, I am
amazed that it has lasted as long as it has. Lately I have been
hearing that we may really be reaching the end of what can be done
"easily",
meaning it is getting a lot harder to keep extending that curve. I
understand we will be seeing some fundamental limits becoming real
problems around 10 nm... but I have heard that song play on the radio
before.

I see 2 interesting limits coming soon:

The width of a line will get below the diameter of an atom.

The cost of a fab line will exceed the GDP.

I'm not sure which one will happen first.
Moore's law is about "doubling the number of transistors". There is still
a third dimension: if you end up with physical limits on a flat die, you
can start stacking transistors in two layers - this would still qualify
as "doubling".

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://bernd-paysan.de/

 
On Sat, 05 Jan 2013 15:45:18 -0800, miso wrote:

On 1/5/2013 1:30 PM, Tim Wescott wrote:
On Sat, 05 Jan 2013 13:36:46 -0600, John S wrote:

On 1/5/2013 12:59 PM, John S wrote:
I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/
phase_coherent.html


The article states:

Phase-coherence is preferred because the phase is remembered.
Oops! This was my conclusion ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

Thanks.

Well, the article does say that phase coherence is preferred for some
services.

The theory is that if you have two independent generators (their A and
B signals) that you switch between, then it is easier to do coherent
demodulation, which gains you a few dB of performance in bit error rate
vs. SNR.

I'm not sure whether they're glossing this over or I just missed it, but for
certain kinds of FSK (MSK comes to mind) you can do coherent
demodulation from a phase-continuous transmitted signal, thereby
getting your more favorable performance vs. SNR while still minimizing
bandwidth.

If you are not continuous phase, you need more bandwidth. Noise is
proportional to the square root of the bandwidth, so I can't see phase
coherence being beneficial in performance. I suppose if you went GMSK
and reduced the bandwidth, then your point is valid.
MSK demodulates like PSK. Reportedly, you can demodulate continuous-
phase Bell 202 (I think it's Bell 202: it's the one with a 2:3 ratio
between frequency separation and baud rate) like a mutant PSK, too.

MSK certainly gives you SNR advantages, along with some of the bandwidth
advantages of GMSK (which is, after all, a modified form of MSK).
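A minimal sketch (Python/NumPy) of the two flavours being contrasted in
this thread: switching the output between two free-running oscillators
(phase-coherent, each tone's phase is remembered, output can step at bit
edges) versus integrating the instantaneous frequency (phase-continuous
CPFSK, no phase jumps, narrower spectrum). The tone frequencies and bit
rate are made-up illustrative values, not from any of the posts.

import numpy as np

fs = 48000                        # sample rate, Hz (assumed)
baud = 300                        # bit rate (assumed)
f_low, f_high = 1200.0, 2200.0    # tone frequencies (assumed)
bits = np.array([0, 1, 1, 0, 1])

spb = fs // baud                  # samples per bit
freqs = np.where(np.repeat(bits, spb) == 1, f_high, f_low)
t = np.arange(len(freqs)) / fs

# Phase-coherent FSK: two free-running tones, output switched between them.
coherent = np.where(freqs == f_high,
                    np.cos(2 * np.pi * f_high * t),
                    np.cos(2 * np.pi * f_low * t))

# Phase-continuous FSK (CPFSK): integrate the instantaneous frequency,
# so the phase never jumps between bits.
phase = 2 * np.pi * np.cumsum(freqs) / fs
continuous = np.cos(phase)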

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
 
On 1/5/2013 4:52 PM, Dave Platt wrote:
I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/phase_coherent.html

The article states:

Phase-coherence is preferred because the phase is remembered.
Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.

I'm not sure whether this would be an issue for demodulators using
quadrature detection.
Excellent point! I had not considered the use of two PLLs. Maybe at high
data rates that would be important.

Thanks.
 
On 1/5/2013 4:52 PM, Dave Platt wrote:
I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/phase_coherent.html

The article states:

Phase-coherence is preferred because the phase is remembered.
Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.
Yes, but how do you distinguish a frequency shift using two slow PLLs?
 
On Sun, 06 Jan 2013 11:44:31 -0600, John S <Sophi.2@invalid.org> wrote:

On 1/5/2013 4:52 PM, Dave Platt wrote:
I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/phase_coherent.html

The article states:

Phase-coherence is preferred because the phase is remembered.
Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.

Yes, but how do you distinguish a frequency shift using two slow PLLs?
Very slowly, methodically, and carefully.

Sub hertz sample rates. ;-) Hehehehehe!
 
Dave Platt wrote:
I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/phase_coherent.html

The article states:

Phase-coherence is preferred because the phase is remembered.
Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.

I'm not sure whether this would be an issue for demodulators using
quadrature detection.
Using a PLL to make a matched filter seems iffy. Couple
of resonant filters and a comparator sounds better to my ear. 'Course,
there's a lot of dance-management too...

--
Les Cargill
 
In article <kccd5v$odu$1@dont-email.me>, John S <Sophi.2@invalid.org> wrote:

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.

Yes, but how do you distinguish a frequency shift using two slow PLLs?
Coherent demodulation.

If you've got two PLLs, each one of which has locked to one of the two
tones in both frequency and phase (e.g. during a training sequence),
then you can simply multiply the incoming signal by each of the two
PLL outputs, and low-pass-filter the results.

When the input is receiving the "low" tone, the input and low-tone PLL
will be in-phase sinusoids, the product of the multiplication will
always be non-negative, and the low-pass-filtered version will be
positive. The input and high-tone PLL will be sinusoids of different
frequencies, of opposite polarity half of the time, the product of the
two will average out to zero and the filtered product will be close to
zero most of the time. Run the two filtered products into a
comparator, and the output will be the original data signal (prior to
the FSK modulation).

You have to set the low-pass filter appropriately, of course, based on
the baud rate of the data signal.
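A minimal sketch of that scheme in Python/NumPy, treating the two PLL
outputs as already-locked reference tones and standing in a one-bit
moving average for the low-pass filter. The frequencies and rates are
illustrative assumptions, not figures from the article.

import numpy as np

fs = 48000                        # sample rate, Hz (assumed)
baud = 300                        # bit rate (assumed)
f_low, f_high = 1200.0, 2200.0    # tone frequencies (assumed)
bits = np.random.randint(0, 2, 40)

spb = fs // baud                  # samples per bit
t = np.arange(len(bits) * spb) / fs

# Phase-coherent transmitter: two free-running tones, switch between them
ref_low = np.cos(2 * np.pi * f_low * t)
ref_high = np.cos(2 * np.pi * f_high * t)
sel = np.repeat(bits, spb)
tx = np.where(sel == 1, ref_high, ref_low)

# Receiver: multiply by each reference (the "locked PLLs") and low-pass
# filter; a moving average one bit long is the crude loop filter here.
def lowpass(x, taps):
    return np.convolve(x, np.ones(taps) / taps, mode="same")

branch_low = lowpass(tx * ref_low, spb)
branch_high = lowpass(tx * ref_high, spb)

# Comparator, sampled at mid-bit: whichever branch is larger wins
mid = np.arange(len(bits)) * spb + spb // 2
rx = (branch_high[mid] > branch_low[mid]).astype(int)
print("bit errors:", int(np.count_nonzero(rx != bits)))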

--
Dave Platt <dplatt@radagast.org> AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will
boycott any company which has the gall to send me such ads!
 
On Mon, 31 Dec 2012 04:49:50 -0500, Rod Pemberton wrote:

However, it's been a while since I
worked in the electronics industry.
It certainly shows.

No, a professional motherboard manufacturer *will
not* accept designs from open-source software even if in the
correct file formats.
How could they tell? A Gerber file is a Gerber file is a Gerber file.
There should be nothing in there to identify its origin. Same for Excellon
drill files.

Thirdly, "you" can't get the modern
components.
Nonsense.

So, you wouldn't be able to get one of those manufactured.
Given you can't do those things, you're going to have to buy a
motherboard he's had manufactured and a part set from him. Even if you
could buy the motherboard and parts from him, you couldn't assemble the
board yourself. Doing so requires access to wave-soldering machines for
through-hole components and SMT oven-soldering machines for the SMT
components.
Nonsense. Hand soldering of SMT components is routinely done every day. I
dare say you couldn't spot the difference between one of my hand soldered
assemblies and a reflow oven assembly.

Through-hole components were *designed* to be hand soldered (by
lines of girls, sitting at benches, with feed guns).

Also, small electronic manufacturing firms have to submit
minimum orders well into the thousands before a board manufacturer will
even consider a run of boards.
Rubbish. I get quotes for runs of tens all the time. My usual order for
small assemblies is "as many as will fit on a standard panel".

--
"For a successful technology, reality must take precedence
over public relations, for nature cannot be fooled."
(Richard Feynman)
 
On Sun, 06 Jan 2013 11:53:00 -0800, Fred Abse
<excretatauris@invalid.invalid> wrote:

How could they tell? A Gerber file is a Gerber file is a Gerber file.
There should be nothing in there to identify its origin. Same for Excellon
drill files.
And that, folks, is what standards are for!
 
On Sat, 05 Jan 2013 12:59:33 -0600, John S <Sophi.2@invalid.org> wrote:

I've looked for info on this and found:

http://www.analog.com/library/analogDialogue/archives/44-11/phase_coherent.html

The article states:

Phase-coherence is preferred because the phase is remembered.
Phase-continuous is lower bandwidth.

They do not say why phase-coherence is preferred.

If one is doing FSK signaling, why is phase-coherence preferred?

Thanks.
Formal consistency check failed. Please resubmit.

?-)
 
On Sat, 05 Jan 2013 17:54:27 -0800, miso <miso@sushi.com> wrote:

Could be, or alternatively it could be reduced inter-symbol interference
due to not having a bunch of random-height step transients at every
transition.

Cheers

Phil Hobbs


CPFSK eliminates the jumps, but the author is espousing the virtues of
PCFSK, i.e. just the opposite.
I am fairly sure you disunderstand. Can you (or anybody here) clearly
explain the difference between CPFSK and PCFSK (and the S/N effects)?

?-)
 
On Sun, 6 Jan 2013 11:13:27 -0800, dplatt@radagast.org (Dave Platt) wrote:

In article <kccd5v$odu$1@dont-email.me>, John S <Sophi.2@invalid.org> wrote:

My guess is that some FSK detectors may operate by locking a pair of
PLLs to the two tones in the signal, with the PLLs having a narrow
capture range and a low rolloff in their loop filter. If the
transmission is phase-coherent, then each PLL would be "right on
target" after a state change in the input signal and would not have to
drift around by a variable amount and re-lock to a different signal
phase.

Yes, but how do you distinguish a frequency shift using two slow PLLs?

Coherent demodulation.

If you've got two PLLs, each one of which has locked to one of the two
tones in both frequency and phase (e.g. during a training sequence),
then you can simply multiply the incoming signal by each of the two
PLL outputs, and low-pass-filter the results.

When the input is receiving the "low" tone, the input and low-tone PLL
will be in-phase sinusoids, the product of the multiplication will
always be non-negative, and the low-pass-filtered version will be
positive. The input and high-tone PLL will be sinusoids of different
frequencies, of opposite polarity half of the time, the product of the
two will average out to zero and the filtered product will be close to
zero most of the time. Run the two filtered products into a
comparator, and the output will be the original data signal (prior to
the FSK modulation).

You have to set the low-pass filter appropriately, of course, based on
the baud rate of the data signal.
Just for grins can you set up a simulation to play with? I don't quite
follow yet.

?-)
 
mj wrote:
We have a 5V buck regulator in our design and now there is a need for another rail of approx 5.3V (unregulated is fine). Due to space and cost we cannot add another regulator for 5.3V or use dual regulators. So I added a Schottky diode D2 as shown in the schematic below and took the feedback to the regulator from after D2. The two feedback resistors R2 and R3 values are unchanged from the existing 5V design. If 5V is maintained after D2 there will be approx 5.3V before D2 (due to the diode drop). Does this work? Are there any shortcomings with this approach?

https://picasaweb.google.com/lh/photo/CJfm2M-ti4gbVdSb-n-IK4RE5T5Ar7rVu6J7QkBUCTM?feat=directlink

Thanks in advance.
-mj
Different propagation times for up and down transients? Just a guess,
really.
 
