OT: Best Free Email (POP Only)

On Thu, 13 Nov 2014 13:34:35 -0800, dplatt@coop.radagast.org (David
Platt) Gave us:

Yup. Even mass-manufactured ("pressed") CDs and DVDs can become
unreadable after storage, typically due to oxidation of the reflective
layer. Look up "bronzing" syndrome... I had several hard-to-replace
audio CDs go bronze and fail, some years ago.

The LaserDisc format had a host of optical disc maladies. Back then,
"multi-layer" meant "double sided", achieved by laminating two pressed
and mirrored discs together.

They had "disc rot", which crept in from the edges, IIRC, and was an
oxidation effect of air getting between the laminated layers, where the
raw mirroring was then exposed to oxygen.

I haven't examined my collection in years, but I would use it as a
benchmark for the age of all my bought, pressed optical discs.

I do not use burned discs for anything other than music archiving.

All my photos, etc can reside on one or more of the MANY robust hard
drives I have around here.

Maybe they burn glass masters slower now, so that the pits get burned
in better, making pressings achieve a better, longer-lasting contrast
ratio.

If optical discs were actually mastered with "cylinders" and hard
sector flags, instead of a big worm, perhaps the laser could "refresh" a
burned set of bits it catches a bad read from.
 
On Thu, 13 Nov 2014 13:34:35 -0800, dplatt@coop.radagast.org (David
Platt) Gave us:

Diversity in redundancy is a good strategy!


I have made money in the diversity receiver/antenna industry.
 
On 11/13/2014 4:34 PM, David Platt wrote:
In article <m437n0$g77$2@dont-email.me>, rickman <gnuarm@gmail.com> wrote:

But DVD's and CD's don't die arbitrarily like USB drives... I had one
die on me last month. Fortunately it was just stuff of which I had
the original copy.

Sure they can. I have some CDs that are no longer readable.

Yup. Even mass-manufactured ("pressed") CDs and DVDs can become
unreadable after storage, typically due to oxidation of the reflective
layer. Look up "bronzing" syndrome... I had several hard-to-replace
audio CDs go bronze and fail, some years ago.

CD-R and DVD±R discs can also deteriorate over time. I prefer to buy
blanks from higher-end manufacturers (I like Taiyo Yuden / JVC) rather
than the cut-rate outfits, as the burn success rate and survivability
seem to be better.

If it is an important, longer-term backup, I always use multiple CDs
or DVDs.

You might want to look into a neat little program called
"DVDisaster". It adds an additional level of Reed-Solomon error
correction to a CD or DVD ISO image - you can append the error
correction data to the ISO for burning, or store it separately. You
can choose the amount of additional Reed-Solomon redundancy to
add... from just-enough, to ridiculously-large.

Lol! Think about this for a moment. The problem is long term backup
*and recovery*. If we are concerned with time spans that allow a CD to
oxidize, what are the chances I will have a machine that can run the
recovery software?


If one or more sectors of the image become unreadable, you can slurp
in as much of the image as *is* readable, run DVDisaster in recovery
mode, and it will perform the calculations needed to restore the lost
sectors.

*If* I still have a running copy of DVDisaster.


For something like a weekly backup I used to use a single disk, but now
I do it to a 3 TB hard drive since it is unlikely that I will lose both
the backup HD and the computer HD at the same time. But for the *very*
important stuff I use off site storage...

Diversity in redundancy is a good strategy!

Yes, as long as you actually do it. lol

--

Rick
 
On 11/10/2014 3:06 PM, Tom Miller wrote:
"Jim Thompson" <To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote
in message news:c5t16apc491t0vt5d6pe43hjdur6di2v4g@4ax.com...
On Mon, 10 Nov 2014 09:02:07 -0800, John Larkin
<jlarkin@highlandtechnology.com> wrote:

On Mon, 10 Nov 2014 08:36:36 -0700, Jim Thompson
<To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote:

On Mon, 10 Nov 2014 09:27:25 -0500, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 11/10/2014 7:53 AM, David Brown wrote:
[snip]

We agree that setting up email isn't very difficult, but we obviously
have very different philosophies of backing up stuff. As far as I'm
concerned, if it isn't backed up on discs that I can touch myself, it
isn't backed up. Server-side backup is a nice-to-have, for sure, but
Google can turn the lights out on Gmail any time it wants. I have
backups of my mail from two or three mail servers that no longer exist
(including some from VM mainframe days).

Cheers

Phil Hobbs

Back-up... I have everything on my local hard-drive _plus_
additionally written to CD's or DVD's.

Terabyte USB hard drives are under $100 nowadays. I back up my mail
daily, rotating backups, and monthly propagate that to my offsite PCs.

DVDs are so last century.

But DVD's and CD's don't die arbitrarily like USB drives... I had one
die on me last month. Fortunately it was just stuff of which I had
the original copy.

...Jim Thompson
--
As long as they don't age like 8 inch floppy disks. In 25 years, will you
be able to read them on your current hardware or will you need to keep
legacy devices around?

In 25 years I'll be dead.

--

Rick
 
On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

You really want an array that scrubs continuously to give you some
assurance that your data *is* still recoverable. Of course, this
often comes at some performance cost (not much for a workstation
but a fair bit for a file server that sees traffic) as the array
is CONSTANTLY being read and checked and (silently) rewritten as
errors are encountered (i.e., BEFORE you would have stumbled on
them).
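
For illustration, a do-it-yourself scrub needs nothing fancy: keep a
checksum manifest, then periodically re-read *everything* against it.
A minimal Python sketch (the manifest name and the hash choice are
arbitrary, and this is only a file-level stand-in for what a proper
RAID scrub does at the block level):

    import hashlib
    import json
    import os
    import sys

    MANIFEST = "manifest.json"  # arbitrary name for the stored checksums

    def file_hash(path):
        # Stream the file so arbitrarily large archives work.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build(root):
        # One pass after writing the archive: record a checksum per file.
        manifest = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                manifest[path] = file_hash(path)
        with open(MANIFEST, "w") as f:
            json.dump(manifest, f, indent=2)

    def scrub():
        # Periodic passes: re-read EVERY file and compare, so silent rot
        # is found long before you actually need the file.
        with open(MANIFEST) as f:
            manifest = json.load(f)
        for path, digest in manifest.items():
            try:
                ok = file_hash(path) == digest
            except OSError:
                ok = False  # unreadable counts as a failure, too
            if not ok:
                print("SCRUB FAILURE:", path)

    if __name__ == "__main__":
        if sys.argv[1] == "build":
            build(sys.argv[2])
        else:
            scrub()

Run "build" once after writing the archive, then "scrub" from a
scheduler thereafter. Unlike a mirror that only verifies on access,
this touches every byte on every pass.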

You should also see if you can "move" an array (or portions of it)
to another "identical" machine. E.g., anticipate how you will
deal with a general hardware failure (bad motherboard, etc.).

I do that now and then, clone my work PC to the identical machines at
home and in the cabin. It always works.

The weird thing is that a drive is always a slot1 drive or a slot2

Exactly. And, you will find that moving either/both of them to a different
make/model machine will leave you with *nothing* (except an offer to
INITIALIZE the drives!).

drive. I can pull a drive from my work machine, either one, and plug
in a replacement, and the drives sync. The pulled drive can be used to
boot another machine, but only in the corresponding slot. Once a
cloned machine boots, I hot-plug a drive into the empty slot and, in
an hour or so, I have two identical drives in that one, too.

You want to make sure you keep two of each such machine and hope they
don't both die at the same time.

I have 6 RAID appliances -- all different. If one of them craps out,
it's "manual recovery mode" for me! I cant just pull the drives from
the failed NAS box and install them in another (different make/model)
and HOPE to be able to access the data.

For this reason, I have been moving my archives onto "home grown"
storage arrays so I can ensure the disks are "accessible" without
relying on a (proprietary) RAID implementation. This is a much
lower performance approach -- but, if one of the boxes dies, I
can plug the drives into any of the other machines, here, and
access them directly from there.

All of these are reasons why you *really* want to rehearse the various
types of failures that you are likely to encounter on each machine
(e.g., motherboard goes flakey and starts writing crap on BOTH drives;
software bug corrupts a volume; etc.)

There is no worse feeling than watching an archive crash! And, wondering
if your precautions against this scenario are adequate (and that you
remember them! Each RAID implementation I've seen has its own wonky
user interface... and, there are no "do overs" there!).

Years ago, I kept my (40GB) archive on 4G external SCSI drives (when they
were $1000/each) that were kept "off-line" (on a shelf in a closet). I
mounted one of them, one day, to pull something off of it. And, the
drive was unreadable!

"Yikes!"

OK, I've anticipated this -- each of the drives had a cloned copy
available (like your RAID mirror -- except the drives are unplugged/off
most of the time).

Installed the backup drive -- and *it* was unreadable!!

"WTF???"

Turns out, the OS had a bug in the disk driver that would scramble
certain make/model drives. I was lucky enough to have a dozen of
them! :<

On another machine (different OS), I began feeding MO disks to rebuild
a copy of the archive. Then, installed two fresh copies on the two
"failed" drives. Of course, I held my breath as each MO disk was read,
wondering if I would encounter an unrecoverable read error (and have
to resort to the *tape* copy of the archive).

We've used floppies, tape, hard drive cartridges, CDs, DVDs, and USB
sticks for backup. I don't think we've ever lost anything important.

Just make lots of copies.

Yup. I keep two copies on "high throughput" media (enterprise grade
magnetic disk); a copy on commodity media (CD/DVD); some things
on tape (DLT/SDLT/Ultrium) and others on MO. *NOTHING* "on-line"
unless it's being accessed!

But, like good gardens, they need to be "tended" regularly!

Also, thinking hard about what's *really* important to save is
time well spent. In the past year, I've been purging my archives
of everything "client related" (they were warned... if they haven't
kept a copy or requested mine... <shrug> let them find a service
if they don't want to handle it themselves!)

I never throw anything away! Not even old D-size vellum drawings.

I moved all of my drawings onto electronic media many years ago.
Map/drawing file just takes up too much space in the house!
Having *done* that, SWMBO later decided she needed one to store
her artwork... <frown> So, no net gain in free space, here.
(and, she had to have the super huge "Arch E"-size file... TWO
of them! :-/ *AND*, somehow has managed to *fill* them!!)

I've "officially" ended support for those folks so why bother holding
onto stuff for them? (it can also be regarded as a *liability* on
my part as I'd still have to safeguard their trade secrets, etc.)

Of course, that makes it a virtual CERTAINTY that I'll receive a
call from someone desperately looking for something!! <shrug>
 
On 11/10/2014 1:06 PM, Tom Miller wrote:

DVDs are so last century.

But DVD's and CD's don't die arbitrarily like USB drives... I had one
die on me last month. Fortunately it was just stuff of which I had
the original copy.

As long as they don't age like 8 inch floppy disks. In 25 years, will you be
able to read them on your current hardware or will you need to keep legacy
devices around?

Of course! (though I can no longer read the hard-sectored ones)
Often, it's just easier to hang onto a legacy system than it is
to pull the data (and executables) off the media and find a
"compatible" means of accessing them.

Though I have been on a "binge" to get rid of all these different
tape media types that have been "forced" on me over the years.
Especially the "cheap" ones (I have no idea why folks are so
stingy -- cost conscious -- on things like that! Jeez, the data
is worth thousands of times more than the media+drive! Why try to
save a couple of dollars on something cheap/flimsy/dubious?)
 
In article <m43b6o$shh$3@dont-email.me>, rickman <gnuarm@gmail.com> wrote:

Lol! Think about this for a moment. The problem is long term backup
*and recovery*. If we are concerned with time spans that allow a CD to
oxidize, what are the chances I will have a machine that can run the
recovery software?

It's open-source, and written in C, and in its basic form it runs from
the command line. No GUI fancies.

If one or more sectors of the image become unreadable, you can slurp
in as much of the image as *is* readable, run DVDisaster in recovery
mode, and it will perform the calculations needed to restore the lost
sectors.

*If* I still have a running copy of DVDisaster.

I wouldn't expect that an executable, built today, would be useful in
25 years.

The source code, and the underlying math, though... those ought to be
portable, and not difficult to port.
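
For a feel of how little machinery the underlying math needs, here is
a toy single-erasure recovery in Python. It uses plain XOR parity
rather than DVDisaster's actual Reed-Solomon codes (which can correct
many lost sectors, not just one), but the principle -- redundancy
computed ahead of time and stored separately, rebuilding data you can
no longer read -- is the same:

    # Toy single-erasure recovery with XOR parity. All blocks must be
    # the same size; one known-lost block can be rebuilt exactly.

    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(blocks):
        parity = bytes(len(blocks[0]))
        for block in blocks:
            parity = xor_blocks(parity, block)
        return parity

    def recover(blocks, parity, lost):
        # XOR of the parity with every surviving block yields the lost one.
        rebuilt = parity
        for i, block in enumerate(blocks):
            if i != lost:
                rebuilt = xor_blocks(rebuilt, block)
        return rebuilt

    sectors = [b"sector-0", b"sector-1", b"sector-2"]  # stand-in "sectors"
    ecc = make_parity(sectors)           # stored separately, like an ecc file
    assert recover(sectors, ecc, 1) == b"sector-1"     # sector 1 "unreadable"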

I'd be more concerned about DVD *drives* ceasing to be available.
Finding (e.g.) a 5.25" floppy drive these days isn't easy... even 3.5"
drives are harder to come by.

Diversity in redundancy is a good strategy!

Yes, as long as you actually do it. lol

Yup. The inevitable "oh, damned" moment tends to occur at just those
times when you haven't done what you knew you should. At least,
that's what happens to me.

It's like they say about guys injured while working with power tools.
The commonest remark, in the emergency room, seems to be "Yeah, I knew
I really shouldn't be doing that."
 
On 11/13/2014 6:50 PM, David Platt wrote:
In article <m43b6o$shh$3@dont-email.me>, rickman <gnuarm@gmail.com> wrote:

Lol! Think about this for a moment. The problem is long term backup
*and recovery*. If we are concerned with time spans that allow a CD to
oxidize, what are the chances I will have a machine that can run the
recovery software?

It's open-source, and written in C, and in its basic form it runs from
the command line. No GUI fancies.

If one or more sectors of the image become unreadable, you can slurp
in as much of the image as *is* readable, run DVDisaster in recovery
mode, and it will perform the calculations needed to restore the lost
sectors.

*If* I still have a running copy of DVDisaster.

I wouldn't expect that an executable, built today, would be useful in
25 years.

If you don't have the executable, how do you recover your data?


The source code, and the underlying math, though... those ought to be
portable, and not difficult to port.

Oh, you write your own code... lol


I'd be more concerned about DVD *drives* ceasing to be available.
Finding (e.g.) a 5.25" floppy drive these days isn't easy... even 3.5"
drives are harder to come by.

I believe floppies have been around for 25 years. Over 30 years, actually.


Diversity in redundancy is a good strategy!

Yes, as long as you actually do it. lol

Yup. The inevitable "oh, damned" moment tends to occur at just those
times when you haven't done what you knew you should. At least,
that's what happens to me.

It's like they say about guys injured while working with power tools.
The commonest remark, in the emergency room, seems to be "Yeah, I knew
I really shouldn't be doing that."

And then there is the recovery process that is seldom tested, only to
find it doesn't work as expected.

--

Rick
 
On Thu, 13 Nov 2014 16:14:18 -0700, Don Y <this@is.not.me.com> wrote:

On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

Haven't had any problems so far. I have rotated drives in and out of
both RAID slots, and dropped off security copies, saved in baggies, as
fallbacks. Cloning to other machines has always worked.




--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 11/14/2014 11:33 AM, John Larkin wrote:
On Thu, 13 Nov 2014 16:14:18 -0700, Don Y <this@is.not.me.com> wrote:

On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

Haven't had any problems so far. I have rotated drives in and out of
both RAID slots, and dropped off security copies, saved in baggies, as
fallbacks. Cloning to other machines has always worked.

Chances are, you will "discover" this problem when it is in a position
to bite you most mercilessly. Unless your array is known to scrub (or
is deliberately TOLD to scrub) periodically, you have no way of
assuring yourself that the data on one volume (in a mirror) agrees with
the data on the other(s). Or, that all the content of a multivolume
(striped, RAID5, etc.) array is consistent and intact.

Sort of like never realizing the batteries in the flashlight are DEAD
until you *need* the flashlight!
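
If both halves of a mirror can be read separately (say, as dd images
of each member), the flashlight check can be done by hand: read
corresponding regions from each side and see whether they actually
agree. A rough Python sketch; the paths are illustrative, and note
that a real mirror's metadata regions may differ legitimately:

    def compare_mirror_halves(path_a, path_b, chunk=1 << 20):
        # Point these at dd images of each half of the mirror.
        offset, mismatches = 0, []
        with open(path_a, "rb") as a, open(path_b, "rb") as b:
            while True:
                ca, cb = a.read(chunk), b.read(chunk)
                if not ca and not cb:
                    break
                if ca != cb:
                    mismatches.append(offset)  # the halves disagree here
                offset += chunk
        return mismatches

    # e.g. compare_mirror_halves("/tmp/half_a.img", "/tmp/half_b.img")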

[For non-trivial array sizes, these operations are VERY time-consuming;
consider how long it takes to read three TB+ drives assuming access to
them can be given, exclusively, to the entity that is validating their
contents. You're often talking many, many hours (days)!]
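
(To put numbers on that: a single 3 TB drive read at a sustained
150 MB/s -- an assumed, fairly generous figure -- needs
3x10^12 / 1.5x10^8 = 20,000 seconds, about 5.5 hours, for one full
pass. Three such drives, read sequentially, is most of a day of
nothing but reading.)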

For anything other than enterprise-class devices, chances are, these
sorts of activities aren't happening automatically (in the RAID subsystem).
For consumer *appliances*, well... <shrug>

What's worse is the vulnerability you experience when the hardware that
supported the RAID becomes dubious (or, outright fails). Firmware
problems in the controllers, firmware incompatibilities in the drives
*you* may have chosen ("unqualified for this application"), hardware
faults (power supply, solder failures, etc.), etc.

Or, you've pulled the drives thinking you are going to just slap
them into your "new machine"... only to discover that the new machine
doesn't recognize them as a live array -- and promptly offers to
initialize them for you!

[I pulled four 250G drive sleds out of a machine thinking I could slip
them into the "new", almost identical machine. Then, "recycled" the
old machine -- only to discover that I had to run back and "recover"
it in order to pull the data off the drives! At 100Mb network speeds
(the only way off of the machine since the drives couldn't be "ported"),
it was painfully slow work.]

Years ago, there was no such thing as interoperability among RAID vendors
and implementations. *Your* disks were essentially *bound* to *your*
machine! Another model RAID controller by the same vendor (or, gasp, a
different vendor) would be a crap shoot as to whether it would see
your disks (containing all your precious data) as anything more than
"bulk (*blank*!) storage media".

DDF now makes this less of a problem -- but, you're wise to VERIFY that
it actually *works* and isn't just a marketing "check-off item"! And,
probably set up a stand-alone box that folks are trained to use to
recover "damaged" arrays/drives -- or a service bureau that you are
comfortable calling on "in a hurry". There's always a temptation to
"try something"... but, with media, there are no Mulligan's! :<
 
On Fri, 14 Nov 2014 14:00:26 -0700, Don Y <this@is.not.me.com> wrote:

On 11/14/2014 11:33 AM, John Larkin wrote:
On Thu, 13 Nov 2014 16:14:18 -0700, Don Y <this@is.not.me.com> wrote:

On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

Haven't had any problems so far. I have rotated drives in and out of
both RAID slots, and dropped off security copies, saved in baggies, as
fallbacks. Cloning to other machines has always worked.

Chances are, you will "discover" this problem when it is in a position
to bite you most mercilessly. Unless your array is known to scrub (or
is deliberately TOLD to scrub) periodically, you have no way of
assuring yourself that the data on one volume (in a mirror) agrees with
the data on the other(s). Or, that all the content of a multivolume
(striped, RAID5, etc.) array is consistent and intact.

Sort of like never realizing the batteries in the flashlight are DEAD
until you *need* the flashlight!

[For non-trivial array sizes, these operations are VERY time-consuming;
consider how long it takes to read three TB+ drives assuming access to
them can be given, exclusively, to the entity that is validating their
contents. You're often talking many, many hours (days)!]

For anything other than enterprise-class devices, chances are, these
sorts of activities aren't happening automatically (in the RAID subsystem).
For consumer *appliances*, well... <shrug>

What's worse is the vulnerability you experience when the hardware that
supported the RAID becomes dubious (or, outright fails). Firmware
problems in the controllers, firmware incompatibilities in the drives
*you* may have chosen ("unqualified for this application"), hardware
faults (power supply, solder failures, etc.), etc.

I blew out the RS232 port on my PC once. I went down the hall, grabbed
another identical machine (we have spares), plugged in my drives, and
everything was back up in 10 minutes on the new box. The worst part
was crawling under my desk to mess with the cables.

The HP boxes are pretty good: hot-plug RAID drives, redundant power
supplies, redundant fans, redundant BIOS (!), ECC RAM, all that. Not
cheap, but brutally reliable.


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 14 Nov 2014 17:06:47 -0800, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

On Fri, 14 Nov 2014 14:00:26 -0700, Don Y <this@is.not.me.com> wrote:

On 11/14/2014 11:33 AM, John Larkin wrote:
On Thu, 13 Nov 2014 16:14:18 -0700, Don Y <this@is.not.me.com> wrote:

On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

Haven't had any problems so far. I have rotated drives in and out of
both RAID slots, and dropped off security copies, saved in baggies, as
fallbacks. Cloning to other machines has always worked.

Chances are, you will "discover" this problem when it is in a position
to bite you most mercilessly. Unless your array is known to scrub (or
is deliberately TOLD to scrub) periodically, you have no way of
assuring yourself that the data on one volume (in a mirror) agrees with
the data on the other(s). Or, that all the content of a multivolume
(striped, RAID5, etc.) array is consistent and intact.

Sort of like never realizing the batteries in the flashlight are DEAD
until you *need* the flashlight!

[For non-trivial array sizes, these operations are VERY time-consuming;
consider how long it takes to read three TB+ drives assuming access to
them can be given, exclusively, to the entity that is validating their
contents. You're often talking many, many hours (days)!]

For anything other than enterprise-class devices, chances are, these
sorts of activities aren't happening automatically (in the RAID subsystem).
For consumer *appliances*, well... <shrug>

What's worse is the vulnerability you experience when the hardware that
supported the RAID becomes dubious (or, outright fails). Firmware
problems in the controllers, firmware incompatibilities in the drives
*you* may have chosen ("unqualified for this application"), hardware
faults (power supply, solder failures, etc.), etc.

I blew out the RS232 port on my PC once. I went down the hall, grabbed
another identical machine (we have spares), plugged in my drives, and
everything was back up in 10 minutes on the new box. The worst part
was crawling under my desk to mess with the cables.

The HP boxes are pretty good: hot-plug RAID drives, redundant power
supplies, redundant fans, redundant BIOS (!), ECC RAM, all that. Not
cheap, but brutally reliable.

You can get G6 dual XEON DL360s with SAS and dual or quad GbE on eBay
all day. Most are mint, some are new, as folks all want the latest and
that is G8 or G9 now.

You should buy a few. I am serious. They are like 1U.

I can remember buying up every HP 3457A we could find, because they
were all selling for like $150 each and they were $1500 meters!

I still have one, because we bought more than we needed in the shop.

NICE meters. NEVER lose cal either (truly).

Those computers would be a similar type investment.

I'd get one if my budget were not so tight right now.
 
On 11/14/2014 6:06 PM, John Larkin wrote:
On Fri, 14 Nov 2014 14:00:26 -0700, Don Y <this@is.not.me.com> wrote:

On 11/14/2014 11:33 AM, John Larkin wrote:
On Thu, 13 Nov 2014 16:14:18 -0700, Don Y <this@is.not.me.com> wrote:

On 11/13/2014 1:05 PM, John Larkin wrote:
On Wed, 12 Nov 2014 20:41:31 -0700, Don Y <this@is.not.me.com> wrote:

[elided]

On my desktop PCs (ProLiants with hot-plug RAID) I occasionally pull
out a drive for one reason or another, and poke in a replacement. It
takes about 1.5 hours to resync them.

What level RAID? What size volume? Does it scrub? Or, are you left
to "discover" your losses down the road (when you decide you *need*
a particular file)?

I think it's raid1, namely the same data on two drives. 62G drives for
C:, which is the OS and my apps. I have other drives, local and USB
and network, for the big stuff.

Wait until you are dealing with TB+ drives and *have* to do a rebuild
(not *choose* to do one but, instead, find yourself with a failed array).
You will sweat *bullets* for hours!

I think you will also discover that your mirrored arrangement is
just false security. Many implementations only "check" files as
you access them (so, any files that you don't access regularly,
can develop problems and you won't know about those problems *or*
that they are harbingers of more to come!). Some implementations
only grab the accessed file off of *one* volume and don't even
look at the second volume unless the first throws a read error.
So, the second volume (your "backup"!) could be trash before you
ever need it!

Haven't had any problems so far. I have rotated drives in and out of
both RAID slots, and dropped off security copies, saved in baggies, as
fallbacks. Cloning to other machines has always worked.

Chances are, you will "discover" this problem when it is in a position
to bite you most mercilessly. Unless your array is known to scrub (or
is deliberately TOLD to scrub) periodically, you have no way of
assuring yourself that the data on one volume (in a mirror) agrees with
the data on the other(s). Or, that all the content of a multivolume
(striped, RAID5, etc.) array is consistent and intact.

Sort of like never realizing the batteries in the flashlight are DEAD
until you *need* the flashlight!

[For non-trivial array sizes, these operations are VERY time-consuming;
consider how long it takes to read three TB+ drives assuming access to
them can be given, exclusively, to the entity that is validating their
contents. You're often talking many, many hours (days)!]

For anything other than enterprise-class devices, chances are, these
sorts of activities aren't happening automatically (in the RAID subsystem).
For consumer *appliances*, well... <shrug>

What's worse is the vulnerability you experience when the hardware that
supported the RAID becomes dubious (or, outright fails). Firmware
problems in the controllers, firmware incompatibilities in the drives
*you* may have chosen ("unqualified for this application"), hardware
faults (power supply, solder failures, etc.), etc.

I blew out the RS232 port on my PC once. I went down the hall, grabbed
another identical machine (we have spares), plugged in my drives, and
everything was back up in 10 minutes on the new box. The worst part
was crawling under my desk to mess with the cables.

I did that on my first PC (bad isolator in a piece of kit I was prototyping).
As I buy two of everything, I just moved the prototype over to the spare
machine (identical software, etc.) and put the "serial+parallel ISA board"
in a pile to bring to vendor for replacement.

"Gee, I dunno what happened! It just went 'pop'..." :>

The HP boxes are pretty good: hot-plug RAID drives, redundant power
supplies, redundant fans, redundant BIOS (!), ECC RAM, all that. Not
cheap, but brutally reliable.

I've given up trying to sort out "best" and "avoid at all costs"
vendors/products.

I volunteered for many years at a facility that recycled electronic
products. My role was to try to recover/repurpose kit to divert
it from landfills, get it into the hands of folks (schools, charities)
that could use it (but couldn't afford to *buy* it), etc.

You'd be just as likely to see a pallet of commodity desktop machines
(figure 1.5 - 3 yr life cycles in business environment) as you would
enterprise *servers*! And, just as likely to find servers with
bad motherboards, power distribution boards (redundant power supplies
don't help you if the mechanism that shares the load fails!), CPUs
toasted from failed fans, etc.

Of course, just because they are bought as servers doesn't mean the
owner is operating them correctly *as* servers! E.g., no idea what
their cold aisles may have been...

The only machines that I *never* seem to find "bad" (i.e., ready to run
on the application of power) are Alpha servers. IBM, HP, Dell, Google,
etc. all seem to have problems (often a common problem for a particular
make/model).

[Of course, buying these sorts of things by the *pound*, as surplus,
all that extra chassis weight is hard to swallow: "Cripes! Can't
we put it in a lighter chassis and buy it for $10 instead of $20?
I sure don't need to be carrying all that 'extra' metal!" :> E.g.,
my BladeCenter is close to 250 pounds and it's just 7U! (14 dual
Xeons each with dual Gb Ethernet and dual 70G SAS drives, quad 2000W
power supplies, two blower modules, etc. -- all hot swappable. OTOH,
I can *stand* on it without fear of collapsing the case...]

The surprising revelation was that SAS/SCSI/FC enterprise drives
tend to have far more usable life than the servers in which they
reside! (not true of consumer/run-of-the-mill PATA/SATA drives
in desktop machines). I had expected the opposite -- that the
drives run so much hotter -- especially when "up" 24/7/365 as
in a server.

I suspect the drive manufacturers are aware of this and take measures
to prolong their operating lives. Including cooling to that portion
of the chassis.

OTOH, the box makers probably think a couple of redundant fans will keep
the rest of the box "comfortable".
 
On Fri, 14 Nov 2014 17:51:17 -0800 DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote in Message id:
<p2cd6a9gsceq536vm9rcvp96atql5669pf@4ax.com>:

I can remember buying up every HP 3457A we could find, because they
were all selling for like $150 each and they were $1500 meters!

I still have one, because we bought more than we needed in the shop.

NICE meters. NEVER lose cal either (truly).

Until the battery dies... Better check it!
 
