getting 16 bit from 8 bit ADC in software

bazhob wrote:
Hello there,

Unfortunately I'm not too much into electronics, but for a
project in Computer Science I need to measure the output
from an op-amp/integrator circuit with a precision of 16 bits
using a standard 8-bit ADC. The integrator is reset every
500 ms, within that interval we have a slope rising from 0V
up to a maximum of about 5V, depending on the input signal.

The paper of the original experiment (which I'm trying to replicate)
contains the following note: "A MC68HC11A0 micro-controller operated
this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."
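
In code, the summing the paper describes seems to amount to something like the
sketch below (Python, just as an illustration; read_adc8 is a hypothetical
placeholder for the HC11 conversion routine, and the 256 reads are assumed to
be spread evenly over the 500 ms ramp):

# Hypothetical sketch of "summing over 256 sub-intervals":
# 256 eight-bit conversions (0..255) added into one 16-bit-wide total.
def read_integrator(read_adc8):
    total = 0
    for _ in range(256):        # one conversion per sub-interval
        total += read_adc8()    # each result is 0..255
    return total                # 0..65280, fits in 16 bits

# Example with a fake converter stuck at code 100:
print(read_integrator(lambda: 100))   # prints 25600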

Can someone please point me to more information about this
technique of doubling the number of bits in software by dividing the
measured interval into 256 sub-intervals? For a start, what would be the
technical term for that?

Thanks a lot!
Toby
 
bazhob wrote:
"A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."
This can, at most, increase signal accuracy by a factor of 16
(four bits), being the square root of 256. And that's assuming
a number of things about the behaviour of the converter.

To increase accuracy to 16 bits, you need to take 256*256 samples
at least, based on random distribution of quantisation errors.
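
A quick simulation of that claim, assuming the optimistic case of ideal
1-LSB uniform dither and a perfectly linear converter (neither is guaranteed
in practice):

# Averaging N dithered 8-bit readings cuts the quantisation error roughly
# as 1/sqrt(N): 256 readings buy about 4 bits, and it takes about
# 256*256 readings to buy 8.
import numpy as np

def rms_error(n_samples, true_value=100.37, trials=200):
    errors = []
    for _ in range(trials):
        dither = np.random.uniform(-0.5, 0.5, n_samples)   # +/- half an LSB
        codes = np.round(true_value + dither)               # ideal quantiser
        errors.append(codes.mean() - true_value)
    return float(np.sqrt(np.mean(np.square(errors))))

for n in (1, 256, 256 * 256):
    print(n, rms_error(n))    # error falls ~16x per step, not 256x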

Clifford Heath.
 
Hello Toby,

....... A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."
Have the authors validated that claim with some hardcore measurements,
such as detecting and restoring a signal that was, say, 13 or 14 dB
below 5Vpp?

Regards, Joerg

http://www.analogconsultants.com
 
Clifford Heath wrote:

bazhob wrote:

"A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."


This can, at most, increase signal accuracy by a factor of 16
(four bits), being the square root of 256. And that's assuming
a number of things about the behaviour of the converter.

To increase accuracy to 16 bits, you need to take 256*256 samples
at least, based on random distribution of quantisation errors.

Clifford Heath.
It's not just quantization, though -- there should be timing information
available from the slope; merely averaging isn't all there is to it.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 26 Jan 2005 15:31:01 -0800, "bazhob" <listen@fomalhaut.de> wrote:

Can someone please point me to more information about this
technique of doubling accuracy in software by dividing the
measured interval by 256? For a start, what would be the
technical term for that?
Aside from the other comments, you might look at the technique described for the
Burr Brown DDC101, I believe. It might add another thing to consider.

Jon
 
mike <spamme0@netscape.net> wrote:


this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."
It's not possible the way you described it.

If the signal is stable and the A/D is stable, you should get the SAME
reading every time??
Exactly, you would need to add a small signal, a 1-LSB sawtooth, and sample over
the sawtooth period. Getting a 16-bit absolute reading is very, very difficult,
just about impossible, from a simple setup like this even if you had a real
16-bit ADC.
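
A small simulation of that sawtooth trick under idealised conditions
(perfectly linear converter, an exact 1-LSB ramp; real hardware will not be
this kind):

# Stepping a 1-LSB sawtooth under the input and averaging one 8-bit
# conversion per step recovers the fractional part of the input.
def dithered_reading(true_value, steps=256):
    total = 0
    for k in range(steps):
        dither = k / steps                 # 0 .. just under 1 LSB
        total += int(true_value + dither)  # idealised 8-bit conversion
    return total / steps

print(dithered_reading(100.37))   # ~100.37 instead of a bare 100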
 
mike wrote:
Tim Wescott wrote:
bazhob wrote:
<big snip>

I thought I understood until I read this.
If the signal is stable and the A/D is stable, you should get the SAME
reading every time??? To get improved ACCURACY or RESOLUTION, don't you
first need a system that's stable to much better than the quantization
interval, and then to perturb the input with a signal of known statistics?
The usual name for the perturbing signal is "dither". There is a fair
amount of literature on the subject if you can find it. My "Comment on
'Noise averaging and measurement resolution'" in Review of Scientific
Instruments, volume 70, page 4734 (1999) lists a bunch of papers on the
subject.

If you're counting on system instability or (uncontrolled) noise to do
the deed, you're just collecting garbage data. yes? no?
IIRR random noise isn't too bad as a dither source - there is an ideal
distribution, but Gaussian noise is pretty close.
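
A toy simulation of that exchange (the 0.5 LSB rms figure below is only an
assumed, plausible dither amplitude):

# With a perfectly stable signal and converter, averaging gains nothing:
# every conversion returns the same code. Add a little Gaussian dither
# and the average converges on the true value.
import random

def average_reading(true_value, n, dither_rms=0.0):
    return sum(round(true_value + random.gauss(0, dither_rms))
               for _ in range(n)) / n

print(average_reading(100.37, 4096))         # 100.0 -- no improvement
print(average_reading(100.37, 4096, 0.5))    # ~100.37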

My own experience of getting 16-bit accuracy out of a quasi-quad-slope
integrator-based ADC was that it took a while, but I had to find out
about "charge soak" in the integrating capacitor the hard way, and also
started out relying on the CMOS protection diodes to catch signals
spiking outside the power rails.

Switching to a polypropylene capacitor and discrete Schottky catching
diodes solved both those problems. A colleague of mine once built a
20-bit system based on a similar integrator, using Teflon(PTFE)
capacitors and a feedback system that minimised the voltage excursions
across the integrating capacitor in a much more intricate and expensive
unit.

------
Bill Sloman, Nijmegen
 
Guy Macon wrote:

Tim Wescott wrote:

The rest is weird science.

You make it sound like that's a bad thing.
He is apparently not selling it.

Rene
 
bazhob wrote:

Unfortunately I'm not too much into electronics, but for a
project in 'Computer Science'
The blind leading the blind, I think it is called.

I need to measure the output from an Opamp/integrator
circuit with a precision of 16-bits using a standard 8-bit ADC.
'Precision' - meaning what? 1/2 bit of noise in the reading?

and performed 8-bit A/D conversion on the integrator output.
A signal accuracy of 16 bits in the integrator reading was
obtained by summing (in software) the result of integration
over 256 sub-intervals."
If this was written by someone at your school you may want to transfer
schools.

It is possible to oversample with a V/F or other integrating
A/D to increase _resolution_: accuracy is out the window.

To pick up one usable bit of resolution you will have to
increase the measuring interval by 4. Homework: why?
Hint: the readings have to be independent.

To get from 8 bits to 16 bits will require a 4^8 integration
period increase, or 65,536 readings.
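
Worked out: with independent readings the noise of the average falls as the
square root of their number, so halving the noise (one extra bit) takes four
times as many readings.

# Independent readings needed for each extra bit of resolution:
for extra_bits in range(1, 9):
    print(extra_bits, "extra bits ->", 4 ** extra_bits, "readings")
# 8 extra bits (8 -> 16) -> 4**8 = 65,536 readings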

This is a very profound area of inquiry. Talk to someone
in signal processing in the EE department.

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer: Electronics; Informatics; Photonics.
To reply, remove spaces: n o lindan at ix . netcom . com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
 
"CBarn24050" <cbarn24050@aol.com> wrote

Exactly, you would need to add a small signal, a 1-LSB sawtooth, and sample over
the sawtooth period.
This happens automatically if you use an asynchronous V/F converter: If there
are, say, 4.5 V/F periods in the sampling interval then 1/2 the time the reading
is 4 and half the time it is 5. Since the reading is noise based you have to
measure 4x to drop the noise by 2x and pick up an extra bit [of resolution].
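
A rough simulation of that, assuming only that the V/F phase is uncorrelated
with the gate (which is what "asynchronous" buys you):

# Counting an asynchronous V/F converter over a fixed gate: each gate sees
# either floor or ceil of the true period count, in proportion to the
# fractional part, so averaging many gates recovers the fraction.
import random

def gated_count(true_periods=4.5):
    phase = random.random()            # random start phase, 0..1 period
    return int(true_periods + phase)   # whole edges seen during the gate

counts = [gated_count() for _ in range(10000)]
print(sum(counts) / len(counts))       # ~4.5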

--
Nicholas O. Lindan, Cleveland, Ohio
Consulting Engineer: Electronics; Informatics; Photonics.
To reply, remove spaces: n o lindan at ix . netcom . com
psst.. want to buy an f-stop timer? nolindan.com/da/fstop/
 
Guy Macon wrote:

Tim Wescott wrote:


The rest is weird science.


You make it sound like that's a bad thing.

:)

No, weird science is fun and sometimes lucrative.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
