Bargain LTSpice/Lab laptop...


bitrex

Guest
On 1/26/2022 6:19 PM, Don Y wrote:
On 1/26/2022 1:04 PM, whit3rd wrote:
Pro photographers,  sound engineering, and the occasional video edit shop
will need one-user big fast disks, but in the modern market, the
smaller and slower
disks ARE big and fast, in absolute terms.

More importantly, they are very reliable.  I come across thousands
(literally)
of scrapped machines (disks) every week.  I've built a gizmo to wipe
them and
test them in the process.  The number of "bad" disks is a tiny fraction;
most
of our discards are disks that we deem too small to bother with (250G or
smaller).

As most come out of corporate settings (desktops being consumer-quality
while servers/arrays being enterprise), they tend to have high PoH
figures...
many exceeding 40K (4-5 years at 24/7).  Still, no consequences to data
integrity.

Surely, if these IT departments feared for data on the thousands of
seats they maintain, they would argue for the purchase of mechanisms
to reduce that risk (as the IT department specs the devices, if they
see high failure rates, all of their consumers will bitch about the
choice that has been IMPOSED upon them!)

The oldest drive I still own, a 250 gig 7200 RPM Barracuda, has
accumulated 64,447 power-on hours according to SMART tools. It was still
in regular use up until two years ago.

It comes from a set of four I bought around 2007 I think. Two of them
failed in the meantime and the other two...well I can't say I have much
of a use for them at this point really, they\'re pretty slow anyway.
 

bitrex

Guest
On 1/26/2022 3:04 PM, whit3rd wrote:
On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
On 1/25/2022 11:18 AM, David Brown wrote:

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device...

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
its entire existence optimizing the performance of that cache.

Not shocking at all; 'the performance' that matters is rarely similar to measured
benchmarks. Even seasoned computer users can misunderstand their
needs and multiply their overhead cost needlessly, to get improvement in operation.

Ya, the argument also seems to be it's wasteful to keep a couple spare
$50 surplus HW RAID cards sitting around, but I should keep a few spare
PCs sitting around instead.

Ok...

Pro photographers, sound engineering, and the occasional video edit shop
will need one-user big fast disks, but in the modern market, the smaller and slower
disks ARE big and fast, in absolute terms.
 

Don Y

Guest
On 1/26/2022 5:51 PM, bitrex wrote:
On 1/26/2022 6:19 PM, Don Y wrote:
On 1/26/2022 1:04 PM, whit3rd wrote:
Pro photographers, sound engineering, and the occasional video edit shop
will need one-user big fast disks, but in the modern market, the smaller and
slower
disks ARE big and fast, in absolute terms.

More importantly, they are very reliable. I come across thousands (literally)
of scrapped machines (disks) every week. I've built a gizmo to wipe them and
test them in the process. The number of "bad" disks is a tiny fraction; most
of our discards are disks that we deem too small to bother with (250G or
smaller).

As most come out of corporate settings (desktops being consumer-quality
while servers/arrays being enterprise), they tend to have high PoH figures...
many exceeding 40K (4-5 years at 24/7). Still, no consequences to data
integrity.

Surely, if these IT departments feared for data on the thousands of
seats they maintain, they would argue for the purchase of mechanisms
to reduce that risk (as the IT department specs the devices, if they
see high failure rates, all of their consumers will bitch about the
choice that has been IMPOSED upon them!)

The oldest drive I still own, a 250 gig 7200 RPM Barracuda, has accumulated
64,447 power-on hours according to SMART tools. It was still in regular use up
until two years ago.

Look to the number of sector remap events to see if the *drive* thinks
it's having problems. None of mine report any such events. (but, I
only check on that stat irregularly)
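Those remap counts are easy to pull out of smartmontools output without eyeballing the whole table. A minimal sketch, assuming the usual `smartctl -A` column layout (attribute name in column 2, raw value in column 10) and the common attribute names -- both can vary by vendor:

```shell
# Extract the health numbers of interest from `smartctl -A` output.
# Column positions and attribute names follow the common smartmontools
# table format; some vendors report different names.
parse_smart() {
    awk '$2 == "Reallocated_Sector_Ct"  { print "remapped=" $10 }
         $2 == "Current_Pending_Sector" { print "pending=" $10 }
         $2 == "Power_On_Hours"         { print "poh=" $10 }'
}
# Usage: smartctl -A /dev/sda | parse_smart
```

A nonzero `remapped` or `pending` count is the drive itself admitting trouble.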

I have a 600 *M*B drive in my Compaq Portable 386 -- and that was tough
to "fit" (cuz the BIOS didn't support anything that big). And a 340M in
a box in case the 600 dies.

I don't recall the drive size in my Voyager -- but it would also
be small (by today\'s standards).

I see 1TB as a nominal drive size. Anything smaller is just used
offline to store disk images (you can typically image a "nearly full"
1TB drive on < 500GB)
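The "image a nearly full 1TB drive on < 500GB" trick works because free space (and much user data) compresses well. A hedged sketch of one way to do it with plain dd and gzip; the device and archive paths are placeholders:

```shell
# Image a source (device or file) into a compressed archive and restore it.
# Runs of zeros (free space) compress to almost nothing, which is why a
# "nearly full" 1TB drive can often land in well under 500GB.
image_disk()   { dd if="$1" bs=1M 2>/dev/null | gzip -c > "$2"; }
restore_disk() { gzip -dc "$1" | dd of="$2" bs=1M 2>/dev/null; }
# Usage (as root): image_disk /dev/sdb /archive/sdb.img.gz
```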

I have some 70G SCA 2.5" drives that I figure might come in handy, some
day. (But, my patience is wearing thin and they may find themselves in
the scrap pile, soon!)

It comes from a set of four I bought around 2007 I think. Two of them failed in
the meantime and the other two...well I can\'t say I have much of a use for them
at this point really, they\'re pretty slow anyway.

Slow is relative. To a 20MHz 386, you'd be surprised how "fast" an old
drive can be! :>
 

Don Y

Guest
On 1/26/2022 6:24 PM, bitrex wrote:
On 1/26/2022 3:04 PM, whit3rd wrote:
On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
On 1/25/2022 11:18 AM, David Brown wrote:

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device...

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
its entire existence optimizing the performance of that cache.

Not shocking at all; 'the performance' that matters is rarely similar to
measured
benchmarks. Even seasoned computer users can misunderstand their
needs and multiply their overhead cost needlessly, to get improvement in
operation.

Ya, the argument also seems to be it's wasteful to keep a couple spare $50
surplus HW RAID cards sitting around, but I should keep a few spare PCs sitting
around instead.

I think the points are that:
- EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
- a spare machine can be used for different purposes other than the need
for which it was originally purchased
- RAID is of dubious value (I've watched each of my colleagues quietly
abandon it after having this discussion years ago. Of course, there's
always some "excuse" for doing so -- but, if they really WANTED to
keep it, they surely could! I'll even offer my collection of RAID cards
for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
SCSI, SAS, etc. -- as damn near every server I've had came with
such a card)

Note that the physical size of the machine isn't even a factor in how
it is used (think USB and FireWire). I use a tiny *netbook* to maintain
my "distfiles" collection: connect it to the internet, plug in the
external drive that holds my current distfile collection, and run a
script that effectively rsync(8)'s with public repositories.

My media tank is essentially a diskless workstation with a couple of
USB3 drives hanging off of it.

My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
with a (laptop) disk drive cobbled inside.

The biggest problem is finding inconspicuous places to hide such kit
while being able to access them (to power them up/down, etc.)

Ok...

Pro photographers, sound engineering, and the occasional video edit shop
will need one-user big fast disks, but in the modern market, the smaller and
slower
disks ARE big and fast, in absolute terms.
 

bitrex

Guest
On 1/26/2022 8:53 PM, Don Y wrote:
On 1/26/2022 6:24 PM, bitrex wrote:
On 1/26/2022 3:04 PM, whit3rd wrote:
On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
On 1/25/2022 11:18 AM, David Brown wrote:

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device...

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
its entire existence optimizing the performance of that cache.

Not shocking at all; 'the performance' that matters is rarely similar
to measured
benchmarks.  Even seasoned computer users can misunderstand their
needs and multiply their overhead cost needlessly, to get
improvement in operation.

Ya, the argument also seems to be it's wasteful to keep a couple spare
$50 surplus HW RAID cards sitting around, but I should keep a few
spare PCs sitting around instead.

I think the points are that:
- EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
- a spare machine can be used for different purposes other than the need
  for which it was originally purchased
- RAID is of dubious value (I've watched each of my colleagues quietly
  abandon it after having this discussion years ago.  Of course, there's
  always some "excuse" for doing so -- but, if they really WANTED to
  keep it, they surely could!  I'll even offer my collection of RAID cards
  for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
  SCSI, SAS, etc. -- as damn near every server I've had came with
  such a card)

I don't know of any more cost-effective solution on Windows that lets me
have easily-expandable mass storage in quite the same way. And RAID 0
at least lets me push to the limit of the SATA bandwidth, as I've shown
is possible, for saving and retrieving giant files like sound libraries.

An M.2 SSD is fantastic but a 4 TB unit is about $500-700 per. With these
HW cards with the onboard BIOS I just pop in more $50 drives if I want
more space and it's set up for me automatically and transparently to the
OS, with a few key-presses in the setup screen that it launches into
automagically on boot if you hit Ctrl+R.

A 2 TB M.2 SSD unit is only about $170, as Mr. Brown says, but I only have one
M.2 slot on my current motherboard and 2 PCIe slots; one of those is taken
up by the GPU and you can maybe put one or two more on a PCIe adapter. I
don't think it makes much sense to keep anything but the OS drive on the
motherboard's M.2 slot.

Hey as an aside did I mention how difficult it is to find a decent AMD
micro-ITX motherboard that has two full-width PCIe slots in the first
place? That also doesn't compromise access to the other PCIe 1x slot or
each other when you install a GPU that takes up two slots.

You can't just put the GPU on any full-width slot either cuz if you read
the fine print it usually says one of them only runs at 4x max if both
slots are occupied; they aren't both really 16x if you use them both.

I don't think a 4x PCIe slot can support two NVMe drives in the first
place. But a typical consumer micro-ITX motherboard still tends to come
with 4 SATA ports, which is nice; however, if you also read the fine print
it tends to say that if you use the onboard M.2 slot at least two of the
SATA ports get knocked out. Not so nice.

I've been burned by using motherboard RAID before; I won't go back that
way for sure. I don't know what Mr. Brown means by "Intel motherboard
RAID" -- I've never had any motherboard whose onboard soft-RAID was
compatible with anything other than that manufacturer's. I'm not nearly as
concerned about what look to be substantial, well-designed Dell PCIe
cards failing as I am about my motherboard failing, frankly; consumer
motherboards are shit!!! Next to PSUs, motherboards are the most common
failure I've experienced in my lifetime; they aren't reliable.

Anyway, the point of this rant is that the cost to get an equivalent amount
of the new hotness in storage performance on a Windows desktop built
with consumer parts starts increasing quickly; it's not really that
cheap, and not particularly flexible.

Note that the physical size of the machine isn't even a factor in how
it is used (think USB and FireWire).  I use a tiny *netbook* to maintain
my "distfiles" collection:  connect it to the internet, plug the
external drive that holds my current distfile collection and run a
script that effectively rsync(8)'s with public repositories.

Y'all act like file systems are perfect; they're not. I can find many
horror stories about trying to restore ZFS partitions in Linux, too, and
if it doesn't work perfectly the first time it looks like it's very
helpful to be proficient with the Linux command line, which I ain't.

My media tank is essentially a diskless workstation with a couple of
USB3 drives hanging off of it.

My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
with a (laptop) disk drive cobbled inside.

The biggest problem is finding inconspicuous places to hide such kit
while being able to access them (to power them up/down, etc.)

Right, I don't want to be a network administrator.
 

bitrex

Guest
On 1/26/2022 8:43 PM, Don Y wrote:
On 1/26/2022 5:51 PM, bitrex wrote:
On 1/26/2022 6:19 PM, Don Y wrote:
On 1/26/2022 1:04 PM, whit3rd wrote:
Pro photographers,  sound engineering, and the occasional video edit
shop
will need one-user big fast disks, but in the modern market, the
smaller and slower
disks ARE big and fast, in absolute terms.

More importantly, they are very reliable.  I come across thousands
(literally)
of scrapped machines (disks) every week.  I've built a gizmo to wipe
them and
test them in the process.  The number of "bad" disks is a tiny
fraction; most
of our discards are disks that we deem too small to bother with (250G or
smaller).

As most come out of corporate settings (desktops being consumer-quality
while servers/arrays being enterprise), they tend to have high PoH
figures...
many exceeding 40K (4-5 years at 24/7).  Still, no consequences to data
integrity.

Surely, if these IT departments feared for data on the thousands of
seats they maintain, they would argue for the purchase of mechanisms
to reduce that risk (as the IT department specs the devices, if they
see high failure rates, all of their consumers will bitch about the
choice that has been IMPOSED upon them!)

The oldest drive I still own, a 250 gig 7200 RPM Barracuda, has
accumulated 64,447 power-on hours according to SMART tools. It was still
in regular use up until two years ago.

Look to the number of sector remap events to see if the *drive* thinks
it's having problems.  None of mine report any such events.  (but, I
only check on that stat irregularly)

See for yourself; I don't know what all of this means:

<https://imgur.com/a/g0EOkNO>

Got the power-on hours slightly wrong before: the raw value FBE1 (hex) = 64481.

SMART still reports this drive as "Good"

I have a 600 *M*B drive in my Compaq Portable 386 -- and that was tough
to "fit" (cuz the BIOS didn't support anything that big).  And a 340M in
a box in case the 600 dies.

I don't recall the drive size in my Voyager -- but it would also
be small (by today\'s standards).

I see 1TB as a nominal drive size.  Anything smaller is just used
offline to store disk images (you can typically image a "nearly full"
1TB drive on < 500GB)

I have some 70G SCA 2.5" drives that I figure might come in handy, some
day.  (But, my patience is wearing thin and they may find themselves in
the scrap pile, soon!)

It comes from a set of four I bought around 2007 I think. Two of them
failed in the meantime and the other two...well I can't say I have
much of a use for them at this point really, they're pretty slow anyway.

Slow is relative.  To a 20MHz 386, you'd be surprised how "fast" an old
drive can be!  :>

Anyone make an ISA to SATA adapter card? Probably.
 

Don Y

Guest
On 1/26/2022 8:40 PM, bitrex wrote:
I think the points are that:
- EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
- a spare machine can be used for different purposes other than the need
for which it was originally purchased
- RAID is of dubious value (I've watched each of my colleagues quietly
abandon it after having this discussion years ago. Of course, there's
always some "excuse" for doing so -- but, if they really WANTED to
keep it, they surely could! I'll even offer my collection of RAID cards
for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
SCSI, SAS, etc. -- as damn near every server I've had came with
such a card)

I don't know of any more cost-effective solution on Windows that lets me have
easily-expandable mass storage in quite the same way. And RAID 0 at least
lets me push to the limit of the SATA bandwidth, as I've shown is possible,
for saving and retrieving giant files like sound libraries.

The easiest way to get more storage is with an external drive.

With USB3, bandwidths are essentially limited by your motherboard
and the drive. (Some USB2 implementations were strangled).

I have *files* that are 50GB (why not?). So, file size isn't
an issue.

If you are careful in your choice of filesystem (and file naming
conventions), you can move the medium to another machine hosted
on a different OS.

To make "moving" easier, connect the drive (USB or otherwise) to
a small computer with a network interface. Then, export the
drive as an SMB or NFS share, wrap a web interface around it, or
access via FTP/etc.
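For the NFS route, the setup amounts to one line in /etc/exports on the small computer plus a mount on each client. A sketch; the share path, subnet, and hostname are invented examples, not from the thread:

```shell
# Generate an /etc/exports entry for a drive hanging off the small box.
add_export() {
    # $1 = exported path, $2 = subnet allowed to mount it (read-only here)
    printf '%s  %s(ro,no_subtree_check)\n' "$1" "$2"
}
add_export /mnt/usbdrive 192.168.1.0/24    # append output to /etc/exports
# Then, on the server:  exportfs -ra
# And on each client:   mount -t nfs smallbox:/mnt/usbdrive /mnt/archive
```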

This is how my archive is built -- so I can access files from
Windows machines, *BSD boxen, SPARCs -- even my ancient 386 portable
(though bandwidth is sorely limited in that last case).

An M.2 SSD is fantastic but a 4 TB unit is about $500-700 per. With these HW
cards with the onboard BIOS I just pop in more $50 drives if I want more space
and it's set up for me automatically and transparently to the OS, with a few
key-presses in the setup screen that it launches into automagically on boot if
you hit Ctrl+R.

But do you really need all that as on-line, "secondary storage"?
And, if so, does it really need to be fast?

My documentation preparation workstation has about 800GB of applications
(related to preparing documentation). The other ~4T are libraries and
collections of "building blocks" that I use in that process.

E.g., if I want to create an animation showing some guy doing something,
I find a suitable 3D model of a guy that looks kinda like I'd like him to
look *in* those libraries. Along with any other "props" I'd like in
this fictional world of his.

But, once I\'ve found the models that I want, those drives are just
generating heat; any future accesses will be in my "playpen" and,
likely, system RAM.

OTOH, none of the other machines ever need to access that collection
of 3D models. So, there's no value in my hosting them on a NAS -- that
would mean the NAS had to be up in order for me to browse its contents:
"Hmmm... this guy isn't working out as well as I'd hoped. Let me see if
I can find an alternative..."

(and, as it likely wouldn't have JUST stuff for this workstation,
it would likely be larger to accommodate the specific needs of a variety
of workstations).

A 2 TB M.2 SSD unit is only about $170, as Mr. Brown says, but I only have one
M.2 slot on my current motherboard and 2 PCIe slots; one of those is taken up by
the GPU and you can maybe put one or two more on a PCIe adapter. I don't think it
makes much sense to keep anything but the OS drive on the motherboard's M.2 slot.

Hey as an aside did I mention how difficult it is to find a decent AMD
micro-ITX motherboard that has two full-width PCIe slots in the first place?
That also doesn't compromise access to the other PCIe 1x slot or each other
when you install a GPU that takes up two slots.

Try finding motherboards that can support half a dozen drives, two dual-slot
GPUs *and* several other slots (for SAS HBA, SCSI HBA, etc.).

You can't just put the GPU on any full-width slot either cuz if you read the
fine print it usually says one of them only runs at 4x max if both slots are
occupied; they aren't both really 16x if you use them both.

I don't think a 4x PCIe slot can support two NVMe drives in the first place.
But a typical consumer micro-ITX motherboard still tends to come with 4 SATA
ports, which is nice; however, if you also read the fine print it tends to say
that if you use the onboard M.2 slot at least two of the SATA ports get knocked
out. Not so nice.

I've been burned by using motherboard RAID before; I won't go back that way for
sure. I don't know what Mr. Brown means by "Intel motherboard RAID" -- I've never
had any motherboard whose onboard soft-RAID was compatible with anything other
than that manufacturer's. I'm not nearly as concerned about what look to be
substantial, well-designed Dell PCIe cards failing as I am about my motherboard
failing, frankly; consumer motherboards are shit!!!

The downside of any RAID is you are tied to the implementation.
I used to run a 15 slot RAID array. PITA moving volumes, adding
volumes, etc.

Now:
# disklabel -I -e sdX
; edit as appropriate *or* copy from another similarly sized volume
# newfs /dev/rsdXa
; no need for more than one "partition" on a drive!
# mount /dev/sdXa /mountpoint
# tar/cp/rcp/rsync/whatever
; copy files onto volume
# updatearchive /mountpoint
; update database of volume's contents and their hashes
# umount /mountpoint
; put volume on a shelf until further need
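The updatearchive script itself isn't shown in the thread; a minimal sketch of the idea (assuming a flat hash catalog per volume rather than whatever database is actually used) might be:

```shell
# Record every file on a mounted volume with its SHA-256 hash, so the
# volume's contents can be searched later while the drive sits on a shelf.
updatearchive() {
    # $1 = mountpoint, $2 = catalog file for this volume
    find "$1" -type f -exec sha256sum {} + > "$2"
}
# Usage: updatearchive /mountpoint /catalogs/volume17.sha256
```

Grepping the catalogs then answers "which shelf volume holds that file?" without spinning anything up.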

Next to PSUs, motherboards
are the most common failure I've experienced in my lifetime; they aren't reliable.

I've never lost one. But, all of mine have been Dell & HP boxes.

Anyway, the point of this rant is that the cost to get an equivalent amount of
the new hotness in storage performance on a Windows desktop built with consumer
parts starts increasing quickly; it's not really that cheap, and not
particularly flexible.

I don't see the problem with using external drives (?)
If you're stuck with USB2, add a USB3 or Firewire card.

And, if your storage is effectively "offline" in terms of
frequency of access, just put the drive on a shelf until
needed. A few (three?) decades ago, I had a short 24U
rack populated with several DEC storage arrays. Eventually,
I discarded the rack (silly boy! :< ) and all but one of the
arrays. I moved all of the drives -- each in its own little
"module" -- onto a closet shelf with a label affixed.
When needed, fire up the array (shelf) and insert the drive(s)
of interest, access as appropriate, then shut everything down.

I use a similar approach, today, with five of these:
<http://www.itinstock.com/ekmps/shops/itinstock/images/dell-powervault-md1000-15-bay-drive-storage-array-san-with-15x-300gb-3.5-15k-sas-[2]-47121-p.jpg>
though I've kept the actual arrays (store the drives *in* the array)
as they serve double duty as my prototype "disk sanitizer" (wipe
60 drives at a time)

Most times, I don't need 15+ drives spinning. So, I pull the sleds for
the drives of interest and install them in a small 4-bay server (the
storage arrays are noisy as hell!) that automatically exports them to
the machines on my LAN.

I have a pair of different arrays configured as a SAN for my ESXi
server, so each of its (24) 2T drives holds VMDKs for different
virtual machine "emulations". But, those are only needed if
the files I want aren't already present on any of the 8 drives
in the ESXi host.

I've got several 8T (consumer) drives placed around one of my PCs
with various contents. Each physical drive has an adhesive label
telling me what it hosts. Find USB cord for drive, plug in wall
wart, plug in USB cable, wait for drive to spin up. Move files
to/from medium. Reverse process to spin it back down. This is
a bit easier than the storage arrays, so that's where I keep my
more frequently accessed "off-line" content.

E.g., I recently went through my MP3 and FLAC libraries
cleaning up filenames, tags, album art, etc. I pulled
all of the content onto a workstation (from one of these
external drives), massaged it as appropriate, then pushed
it back onto the original medium (and updated my database
so the hashes were current... this last step not necessary
for other folks).

*Lots* of ways to give yourself extra storage. Bigger
problem is keeping track of all of it! Like having
a room full of file cabinets and wondering where "that"
particular file was placed :<

Note that the physical size of the machine isn't even a factor in how
it is used (think USB and FireWire). I use a tiny *netbook* to maintain
my "distfiles" collection: connect it to the internet, plug the
external drive that holds my current distfile collection and run a
script that effectively rsync(8)'s with public repositories.

Y'all act like file systems are perfect; they're not. I can find many horror
stories about trying to restore ZFS partitions in Linux, too, and if it
doesn't work perfectly the first time it looks like it's very helpful to be
proficient with the Linux command line, which I ain't.

Why would you be using ZFS? That's RAID for masochists. Do those
gold audio cables make a difference in your listening experience?
If not, why bother with them?! What's ZFS going to give you -- besides
bragging rights?

Do you have a problem with media *failures*? (not PEBKAC) If the answer
is "no", then live with "simple volumes". This makes life *so* much
easier, as you don't have to remember any special procedures to
create new volumes, add volumes, remove volumes, etc.

If I install N drives in a machine and power it up, I will see N
mount points: /0 ... /N. If I don't see a volume backing a
particular mount point, then there must be something wrong with that
drive (did I ever bother to format it? does it host some oddball
filesystem? did I fail to fully insert it?)
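That convention is easy to script. A sketch of the numbered-mount-point idea, assuming one filesystem per drive as described above; the device names are examples, and DRYRUN=1 just prints what would be done instead of mounting:

```shell
# Mount each given drive at a numbered mount point: /0, /1, ...
# A missing mount point afterwards points at the drive, not the script.
mount_all() {
    i=0
    for dev in "$@"; do
        if [ -n "$DRYRUN" ]; then
            echo "mount $dev /$i"          # dry run: show intent only
        else
            mkdir -p "/$i" && mount "$dev" "/$i"
        fi
        i=$((i + 1))
    done
}
# Usage: DRYRUN=1 mount_all /dev/sda1 /dev/sdb1
```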

My media tank is essentially a diskless workstation with a couple of
USB3 drives hanging off of it.

My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
with a (laptop) disk drive cobbled inside.

The biggest problem is finding inconspicuous places to hide such kit
while being able to access them (to power them up/down, etc.)

Right, I don't want to be a network administrator.

Anyone who can't set up an appliance, nowadays, is a dinosaur.
The same applies to understanding how a "simple network" works
and is configured.

I spend no time "administering" my network. Every box has a
static IP -- so I know where it "should be" in my local
address space. And a name. Add each new host to the NTP
configuration so its clock remains in sync with the rest.
Decide which other services you want to support.

Configure *once*. Forget. (but, leave notes as to how you
do these things so you can add another host, 6 mos from now)
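The "configure once" notes can amount to little more than this (all names and addresses invented for illustration):

```shell
# Illustrative only -- hostnames and addresses are made up, not from the post.
#
# /etc/hosts: every box gets a static IP and a name.
#   192.168.1.10   mediatank
#   192.168.1.11   dnsbox
#
# /etc/ntp.conf on each host: keep its clock synced to the local server.
#   server 192.168.1.11 iburst
```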

The "problems" SWMBO calls on me to fix are: "The printer
is broken!" "No, it's out of PAPER! See this little light,
here? See this paper drawer? See this half-consumed ream
of paper? This is how they work together WITH YOU to solve
your 'problem'..."

Of course, I've learned to avoid certain MS services that
have proven to be unreliable or underperformant. But,
that's worthwhile insight gained (so as not to be bitten later)
 

Don Y

Guest
On 1/26/2022 8:59 PM, bitrex wrote:
Slow is relative. To a 20MHz 386, you'd be surprised how "fast" an old
drive can be! :>

Anyone make an ISA to SATA adapter card? Probably.

The 386 portable has no internal slots. I *do* have the expansion chassis
(a large "bag" that bolts on the back) -- but it only supports two slots
and I have a double-wide card that usually sits in there.

A better move is a PATA-SATA adapter (I think I have some of those...
or, maybe they are SCA-SCSI or FW or... <shrug>)

But, you're still stuck with the limitations of the old BIOS -- which
had no "user defined" disk geometry (and dates from the days when the
PC told the drive what its geometry would be). To support the
600M drive, I had to edit the BIOS EPROMs (yes, that's how old it is!)
and fix the checksum so my changes didn't signal a POST fault.

[Or, maybe it was 340M like the "spare" I have? I can't recall.
Booting it to check would be tedious as the BBRAM has failed (used
a large battery for that, back then, made in Israel, IIRC). And,
"setup" resides on a 5" floppy just to get in and set the parameters...
Yes, only of value to collectors! But, a small footprint way for me
to support a pair of ISA slots!]
 

Dave Platt

Guest
In article <HMoIJ.359537$aF1.247113@fx98.iad>,
bitrex <user@example.net> wrote:

Look to the number of sector remap events to see if the *drive* thinks
it's having problems.  None of mine report any such events.  (but, I
only check on that stat irregularly)

See for yourself; I don't know what all of this means:

https://imgur.com/a/g0EOkNO

Got the power-on hours slightly wrong before: the raw value FBE1 (hex) = 64481.

SMART still reports this drive as "Good"

That's how the numbers in that report look to me.

The raw read-error rate is very low, and hasn\'t ever been anything
other than very low.

The drive has never been unable to read data from one of its sectors.
It has never felt a need to declare a sector "bad" (e.g. required too
many read retries), and move its data to a spare sector. There are no
sectors which are "pending" that sort of reallocation. Hardware-level
error-correction-code data recoveries (e.g. low-level bit errors during
read, fully corrected by the ECC) seem quite reasonable.

It seems to be spinning up reliably when power comes on.

It does appear to have gotten hotter than it wanted to be, at some point
(the on-a-scale-of-100 "airflow temperature" value was below
threshold). Might want to check the fans and filters and make sure
there's enough air flowing past the drive.
 
