Bargain LTSpice/Lab laptop...

bitrex

<https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071>

Last of the Japanese/German-made business-class machines; several years
old now, but AFAIK they're well-built and not a PITA to work on, and even
a refurb should be good for a few more years of service...
 
On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071

Last of the Japanese/German-made business-class machines; several years
old now but AFAIK they\'re well-built and not a PITA to work on, and even
a refurb should be good for a few more years of service...

I bought a 17" new laptop with just 12 GB of RAM as a second computer when my first died and I needed something right away to copy data off the old hard drive. It was very light and nice to use in a portable setting. But the combination of 12 GB RAM and the rotating hard drive was just too slow. I ended up getting another 17" machine with a 1 TB flash drive and 16 GB of RAM, expecting to have to upgrade to 32 GB... but it runs very well, even when the 16 GB is maxed out. That has to be due to the flash drive being so much faster than a rotating drive. I've never bothered to upgrade it. Maybe if I were running simulations a lot that would show up... or I could just close a browser or two. They are the real memory hogs these days.

The new machine is not as light as the other one, but still much lighter than my Dell Precision gut buster. I ended up returning the 12 GB machine when I found the RAM was not upgradable.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
 
On 1/24/2022 3:28 PM, Rick C wrote:
On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071

Last of the Japanese/German-made business-class machines; several years
old now but AFAIK they\'re well-built and not a PITA to work on, and even
a refurb should be good for a few more years of service...

I bought a 17\" new laptop with just 12 GB of RAM as a second computer when my first died and I needed something right away to copy data to off the old hard drive. It was very light and nice to use in a portable setting. But the combination of 12 GB RAM and the rotating hard drive was just too slow. I ended up getting another 17\" inch machine with 1 TB flash drive and 16 GB of RAM expecting to have to upgrade to 32 GB... but it runs very well, even when the 16 GB is maxed out. That has to be due to the flash drive being so much faster than a rotating drive. I\'ve never bothered to upgrade it. Maybe if I were running simulations a lot that would show up... or I could just close a browser or two. They are the real memory hogs these day.

The new machine is not as light as the other one, but still much lighter than my Dell Precision gut buster. I ended up returning the 12 GB machine when I found the RAM was not upgradable.

I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
which seems adequate for just about anything I throw at it.

I'd be surprised if that Fujitsu can't be upgraded to at least 16.

Another nice deal for mass storage/backups of work files is these
surplus Dell H700 hardware RAID controllers: if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card. They probably
used to be in servers, but they work fine OOTB with Windows 10/11 and the
modern Linux distros I've tried, and you don't have to muck with the OS
software RAID or the motherboard's software RAID.

Yes, a RAID array isn't a backup, but I don't see any reason not to have
your on-site backup in RAID 1.

<https://www.amazon.com/Dell-Controller-Standard-Profile-J9MR2/dp/B01J4744L0/>
 
On 1/24/2022 3:39 PM, bitrex wrote:
On 1/24/2022 3:28 PM, Rick C wrote:
On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071


Last of the Japanese/German-made business-class machines; several years
old now but AFAIK they\'re well-built and not a PITA to work on, and even
a refurb should be good for a few more years of service...

I bought a 17\" new laptop with just 12 GB of RAM as a second computer
when my first died and I needed something right away to copy data to
off the old hard drive.  It was very light and nice to use in a
portable setting.  But the combination of 12 GB RAM and the rotating
hard drive was just too slow.  I ended up getting another 17\" inch
machine with 1 TB flash drive and 16 GB of RAM expecting to have to
upgrade to 32 GB... but it runs very well, even when the 16 GB is
maxed out.  That has to be due to the flash drive being so much faster
than a rotating drive.  I\'ve never bothered to upgrade it.  Maybe if I
were running simulations a lot that would show up... or I could just
close a browser or two.  They are the real memory hogs these day.

The new machine is not as light as the other one, but still much
lighter than my Dell Precision gut buster.  I ended up returning the
12 GB machine when I found the RAM was not upgradable.


I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
which seems adequate for just about anything I throw at it.

I\'d be surprised if that Fujitsu can\'t be upgraded to at least 16.

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup

Isn't a stand-alone backup "policy", rather.
 
Rick C <gnuarm.deletethisbit@gmail.com> wrote in
news:685c2c10-c084-4d06-8f00-bf47fae4ee30n@googlegroups.com:

That has to be due to the flash drive being so much faster than a
rotating drive.

Could easily be processor-related as well. Make a bigger user-defined
swap space on it. It would probably run faster under Ubuntu (or any
Linux) as well.

I have a now three-year-old 17" Lenovo P71 as my main PC.

It has an SSD as well as a spinning drive in it, but is powered by a
graphics-workstation-class Xeon and Quadro graphics pushing a 4k
display, and would push several more via the Thunderbolt I/O ports.
And it is only 16GB RAM. It will likely be the last full PC machine I
own. At $3500 for a $5000 machine it ought to last for years. No
disappointments for me.

It is my 3D CAD workstation and has Windows 10 Pro Workstation on it.
I keep it fully upgraded and have never had a problem, and it benchmarks
pretty dag nab fast too. And I also have the docking station for it,
which was another $250. Could never be more pleased. The only
drawback is that it weighs a ton and it is nearly impossible to find a
backpack that will fit it. I know now why college kids stay below 17"
form-factor machines.
 
On 24/01/2022 21:39, bitrex wrote:
Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.

You use RAID for three purposes, which may be combined - to get higher
speeds (for your particular usage), to get more space (compared to a
single drive), or to get reliability and better up-time in the face of
drive failures.

Yes, you should use RAID on your backups - whether it be a server with
disk space for copies of data, or "manual RAID1" by making multiple
backups to separate USB flash drives. But don't imagine RAID is
connected with "backup" in any way.
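
A minimal sketch of that "manual RAID1" idea, assuming two hypothetical
USB mount points (/mnt/usb_a, /mnt/usb_b) and a made-up source folder -
adjust paths to whatever your drives actually mount as:

# Hypothetical "manual RAID1": write the same backup tree to two USB sticks.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "work"                        # what to back up (placeholder)
TARGETS = [Path("/mnt/usb_a"), Path("/mnt/usb_b")]   # two independent copies

def backup_to(target: Path) -> None:
    dest = target / f"work-{date.today().isoformat()}"
    # dirs_exist_ok lets you re-run the same day without an error (Python 3.8+)
    shutil.copytree(SOURCE, dest, dirs_exist_ok=True)
    print(f"copied {SOURCE} -> {dest}")

for t in TARGETS:
    if t.exists():
        backup_to(t)
    else:
        print(f"skipping {t}: not mounted")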


From my experience with RAID, I strongly recommend you dump these kinds
of hardware RAID controllers. Unless you are going for serious
top-shelf equipment with battery backup, guaranteed response time by
recovery engineers with spare parts and that kind of thing, use Linux
software RAID. It is far more flexible, faster, more reliable and -
most importantly - much easier to recover in the case of hardware failure.

Any RAID system (assuming you don't pick RAID0) can survive a disk
failure. The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly beep
to itself behind closed doors?), and how you can recover. Your fancy
hardware RAID controller card is useless when you find you can't get a
replacement disk that is on the manufacturer's "approved" list from a
decade ago. (With Linux, you can use /anything/ - real, virtual, local,
remote, flash, disk, whatever.) And what do you do when the RAID card
dies (yes, that happens)? For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale. (With Linux, plug the drives into a new
system.)

I have only twice lost data from RAID systems (and had to restore them
from backup). Both times it was hardware RAID - good quality Dell and
IBM stuff. Those are, oddly, the only two hardware RAID systems I have
used. A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software
RAID too, if you don't like Linux.)
 
On Monday, January 24, 2022 at 5:11:25 PM UTC-5, DecadentLinux...@decadence..org wrote:
Rick C <gnuarm.del...@gmail.com> wrote in
news:685c2c10-c084-4d06...@googlegroups.com:
That has to be due to the flash drive being so much faster than a
rotating drive.
Could easily be processor related as well. Make a bigger user
defined swap space on it. It would probably run faster under Ubuntu
(any Linux) as well.

I have a now three year old 17\" Lenovo P71 as my main PC.

It has an SSD as well as a spinning drive in it but is powered by a
graphics workstation class Xeon and Quadro graphics pushing a 4k
display and would push several more via the thrunderbolt I/O ports.
And it is only 16GB RAM. It will likely be the last full PC machine I
own. For $3500 for a $5000 machine it ought to last for years. No
disappointments for me.

It is my 3D CAD workstation and has Windows 10 Pro Workstation on it.
I keep it fooly upgraded and have never had a problem and it benchmarks
pretty dag nab fast too. And I also have the docking station for it
which was another $250. Could never be more pleased. The only
drawback is that it weighs a ton and it is nearly impossible to find a
backpack that will fit it. I know now why college kids stay below 17\"
form factor machines.

I've had only 17" machines since day one of my laptops and always find an adequate bag for them. I had a couple of fabric bags which held the machines well, but when I got the Dell monster it was a tight squeeze. Then a guy was selling leather at Costco. I bought a wallet and a brief bag (not hard-sided, so I can't call it a case). It's not quite a computer bag as it has no padding, not even for the corners. Again, the Dell fit, but tightly. Now that I have this thing (a Lenovo, which I swore I would never buy again, but here I am) I can even fit the lesser 17 inch laptop in the bag at the same time! It doesn't have as many nooks and crannies, but everything fits, and the bag drops into the sizer at the airport for a "personal" bag.

I've always been anxious about bags on airplanes. I've seen too many cases of the ticket guys being jerks and making people pay for extra baggage or even requiring them to check bags that don't fit the outline. I was boarding my most recent flight and the guy didn't like my plastic grocery store bag, asking what was in it. I told him it was food for the flight and clothing. I was going from 32 °F to 82 °F and had a bulky sweater and warm gloves I had already taken off before the flight. The guy told me to put the clothes in the computer bag, as if they would fit!!! I pushed back, explaining this was what I had to wear to get to the airport without getting hypothermia. He had to mull that over and let me board the plane. WTF??!!!

I saw another guy doing the same thing with a family whose children had plastic bags with souvenir stuffed animals or something. Spirit wants $65 each for carry-on at the gate. He didn't even recommend that they stuff it all into a single bag. Total jerk! No wonder I don't like airlines.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
On 1/24/2022 5:48 PM, David Brown wrote:
On 24/01/2022 21:39, bitrex wrote:

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.


You use RAID for three purposes, which may be combined - to get higher
speeds (for your particular usage), to get more space (compared to a
single drive), or to get reliability and better up-time in the face of
drive failures.

Yes, you should use RAID on your backups - whether it be a server with
disk space for copies of data, or \"manual RAID1\" by making multiple
backups to separate USB flash drives. But don\'t imagine RAID is
connected with \"backup\" in any way.


From my experience with RAID, I strongly recommend you dump these kind
of hardware RAID controllers. Unless you are going for serious
top-shelf equipment with battery backup, guaranteed response time by
recovery engineers with spare parts and that kind of thing, use Linux
software raid. It is far more flexible, faster, more reliable and -
most importantly - much easier to recover in the case of hardware failure.

Any RAID system (assuming you don\'t pick RAID0) can survive a disk
failure. The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly beep
to itself behind closed doors?), and how you can recover. Your fancy
hardware RAID controller card is useless when you find you can\'t get a
replacement disk that is on the manufacturer\'s \"approved\" list from a
decade ago. (With Linux, you can use /anything/ - real, virtual, local,
remote, flash, disk, whatever.) And what do you do when the RAID card
dies (yes, that happens) ? For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale. (With Linux, plug the drives into a new
system.)

I have only twice lost data from RAID systems (and had to restore them
from backup). Both times it was hardware RAID - good quality Dell and
IBM stuff. Those are, oddly, the only two hardware RAID systems I have
used. A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software
RAID too, if you don\'t like Linux.)

I'm considering a hybrid scheme where the system partition is put on the
HW controller in RAID 1, non-critical files that I want fast access to,
like audio/video, are on HW RAID 0, and the more critical long-term
on-site mass storage that's not accessed too much is in some kind of
software redundant-RAID equivalent, with changes synced to a cloud
backup service.

That way you can boot from something other than the dodgy motherboard
software-RAID but you\'re not dead in the water if the OS drive fails,
and can probably use the remaining drive to create a today-image of the
system partition to restore from.

Worst case, you restore the system drive from your last image, or from
scratch if you have to. Restoring the system drive from scratch isn't a
crisis, but it is seriously annoying, and most people don't make system
drive images every day.
 
On 1/24/2022 5:48 PM, David Brown wrote:


Any RAID system (assuming you don\'t pick RAID0) can survive a disk
failure. The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly beep
to itself behind closed doors?), and how you can recover. Your fancy
hardware RAID controller card is useless when you find you can\'t get a
replacement disk that is on the manufacturer\'s \"approved\" list from a
decade ago. (With Linux, you can use /anything/ - real, virtual, local,
remote, flash, disk, whatever.) And what do you do when the RAID card
dies (yes, that happens) ? For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale. (With Linux, plug the drives into a new
system.)
It's not the last word in backup; why should I have to do any of that? I
just go get a new, modern controller and drives and restore from my
off-site backup...
 
On 1/24/2022 6:14 PM, bitrex wrote:
It\'s not the last word in backup, why should I have to do any of that I just go
get new modern controller and drives and restore from my off-site backup...

Exactly. If your drives are "suspect", then why are you still using them?
RAID is a complication that few folks really *need*.

If you are using it, then you should feel 100.0% confident in taking
a drive out of the array, deliberately scribbling on random sectors
and then reinstalling it in the array to watch it recover. A good exercise
to remind you what the process will be like when/if it happens for real.
(Just like doing an unnecessary "restore" from a backup.)

RAID (5+) is especially tedious (and wasteful) with large arrays.
Each of my workstations has 5T spinning. Should I add another ~8T just
to be sure that first 5T remains intact? Or, should I have another
way of handling the (low-probability) event of having to restore some
"corrupted" (or accidentally deleted?) portion of the filesystem?

Image your system disk (and any media that host applications).
Then, back up your working files semi-regularly.

I've "lost" two drives in ~40 years: one in a laptop that I
had configured as a 24/7/365 appliance (I'm guessing the drive
didn't like spinning up and down constantly; I may have been
able to prolong its life by NOT letting it spin down) and
another drive that developed problems in the boot record
(and was too small -- 160GB -- to bother trying to salvage).

[Note that I have ~200 drives deployed, here]
 
On 1/24/2022 1:39 PM, bitrex wrote:
I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year, which
seems adequate for just about anything I throw at it.

Depends, of course, on "what you throw at it". Most of my workstations
have 144G of RAM, 5T of rust. My smallest (for writing software) has
just 48G. The CAD, EDA and document prep workstations can easily eat
gobs of RAM to avoid paging to disk. Some of my SfM "exercises" will
eat every byte that's available!

I\'d be surprised if that Fujitsu can\'t be upgraded to at least 16.

Another nice deal for mass storage/backups of work files are these surplus Dell
H700 hardware RAID controllers, if you have a spare 4x or wider PCIe slot you
get 8 channels of RAID 0/1 per card, the used to be in servers probably but
they work fine OOTB with Windows 10/11 and the modern Linux distros I\'ve tried,
and you don\'t have to muck with the OS software RAID or the motherboard\'s
software RAID.

RAID is an unnecessary complication. I've watched all of my peers dump
their RAID configurations in favor of simple "copies" (RAID1 without
the controller). Try upgrading a drive (to a larger size). Or,
moving a drive to another machine (I have 6 identical workstations
and can just pull the "sleds" out of one to move them to another
machine if the first machine dies -- barring license issues).

If you experience failures, then you assign value to the mechanism
that protects against those failures. OTOH, if you *don't*, then
any costs associated with those mechanisms become the dominant
factor in your usage decisions. I.e., if they make other "normal"
activities (disk upgrades) more tedious, then that counts against
them, nullifying their intended value.

E.g., most folks experience PEBKAC failures, which RAID won't prevent.
Yet they are still lazy about backups (which could alleviate those failures).

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have your
on-site backup in RAID 1.

I use surplus "shelves" as JBOD with a SAS controller. This allows me to
also pull a drive from a shelf and install it directly in another machine
without having to muck with taking apart an array, etc.

Think about it: do you ever have to deal with a (perceived) "failure"
when you have lots of *spare* time on your hands? More likely, you
are in the middle of something and not keen on being distracted by
a "maintenance" issue.

[In the early days of the PC, I found having duplicate systems to be
a great way to verify a problem was software-related vs. a "machine
problem": pull drive, install in identical machine and see if the
same behavior manifests. Also good when you lose a power supply
or some other critical bit of hardware and can work around it just by
moving media (I keep 3 spare power supplies for my workstations
as a prophylactic measure) :> ]
 
On 1/24/2022 11:39 PM, Don Y wrote:
On 1/24/2022 6:14 PM, bitrex wrote:
It\'s not the last word in backup, why should I have to do any of that
I just go get new modern controller and drives and restore from my
off-site backup...

Exactly.  If your drives are \"suspect\", then why are you still using them?
RAID is a complication that few folks really *need*.

If you are using it, then you should feel 100.0% confident in taking
a drive out of the array, deliberately scribbling on random sectors
and then reinstalling in the array to watch it recover.  A good exercise
to remind you what the process will be like when/if it happens for real.
(Just like doing an unnecessary \"restore\" from a backup).

RAID (5+) is especially tedious (and wasteful) with large arrays.
Each of my workstations has 5T spinning.  Should I add another ~8T just
to be sure that first 5T remains intact?  Or, should I have another
way of handling the (low probability) event of having to restore some
\"corrupted\" (or, accidentally deleted?) portion of the filesystem?

Image your system disk (and any media that host applications).
Then, backup your working files semi-regularly.

I\'ve \"lost\" two drives in ~40 years:  one in a laptop that I
had configured as a 24/7/365 appliance (I\'m guessing the drive
didn\'t like spinning up and down constantly; I may have been
able to prolong its life by NOT letting it spin down) and
another drive that developed problems in the boot record
(and was too small -- 160GB -- to bother trying to salvage).

[Note that I have ~200 drives deployed, here]

The advantage I see in RAID-ing the system drive and projects drive is
avoidance of downtime mainly; the machine stays usable while you prepare
the restore solution.

In an enterprise situation you have other machines and an enterprise-class
network and Internet connection to aid in this process. I have a small
home office with one "business class" desktop PC and a consumer Internet
connection; if there are a lot of files to restore, the off-site backup
place may have to mail you a disk.

Ideally I don't have to do that either; I just go to the local NAS
nightly backup, but maybe lose some of the day's work if I only have one
projects drive and it's failed. Not the worst thing, but with a hot image
you don't have to lose anything unless you're very unlucky and the
second drive fails while you do an emergency sync.

But particularly if the OS drive goes down it's very helpful to still
have a usable desktop that can assist in its own recovery.
 
On 1/24/2022 10:30 PM, bitrex wrote:

Image your system disk (and any media that host applications).
Then, backup your working files semi-regularly.

I\'ve \"lost\" two drives in ~40 years: one in a laptop that I
had configured as a 24/7/365 appliance (I\'m guessing the drive
didn\'t like spinning up and down constantly; I may have been
able to prolong its life by NOT letting it spin down) and
another drive that developed problems in the boot record
(and was too small -- 160GB -- to bother trying to salvage).

[Note that I have ~200 drives deployed, here]

The advantage I see in RAID-ing the system drive and projects drive is
avoidance of downtime mainly; the machine stays usable while you prepare the
restore solution.

But, in practice, how often HAS that happened? And, *why*? I.e.,
were you using old/shitty drives (and should have "known better")?
How "anxious" will you be knowing that you are operating on a
now-faulted machine?

Enterprise situation you have other machines and a enterprise-class network and
Internet connection to aid in this process, I have a small home office with one
\"business class\" desktop PC and a consumer Internet connection, if there are a
lot of files to restore the off-site backup place may have to mail you a disk.

Get a NAS/SAN. Or, "build" one using an old PC (that you have "outgrown").
Note that all you need the NAS/SAN/homegrown solution to do is be "faster"
than your off-site solution.

Keep a laptop in the closet for times when you need to access the outside
world while your primary machine is dead (e.g., to research the problem,
download drivers, etc.)

I have a little headless box that runs my DNS/TFTP/NTP/font/etc. services.
It's a pokey little Atom @ 1.6GHz/4GB with a 500G laptop drive.
Plenty fast enough for the "services" that it regularly provides.

But it's also "available" 24/7/365 (because the services that it provides
are essential to EVERY machine in the house, regardless of the time of day I
might choose to use them) so I can always push a tarball onto it to take a
snapshot of whatever I'm working on at the time. (Hence the reason for such
a large drive on what is actually just an appliance.)
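
As a rough illustration of that push-a-tarball habit - every name and
path below is made up, assuming the little box is reachable through a
mount at /mnt/atombox:

# Snapshot the current project directory as a dated tarball on the always-on box.
import tarfile
from datetime import datetime
from pathlib import Path

PROJECT = Path.home() / "projects" / "current"   # whatever you worked on today (placeholder)
DEST = Path("/mnt/atombox/snapshots")            # assumed network mount to the Atom box

DEST.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
out = DEST / f"{PROJECT.name}-{stamp}.tar.gz"
with tarfile.open(out, "w:gz") as tar:
    tar.add(PROJECT, arcname=PROJECT.name)       # keep a sane top-level dir name
print("wrote", out)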

Firing up a NAS/SAN is an extra step that I would tend to avoid -- because
it's not normally up and running. By contrast, the little Atom box always
*is* (so, let it serve double-duty as a small NAS).

Ideally I don\'t have to do that either I just go to the local NAS
nightly-backup but maybe lose some of the day\'s work if I only have one
projects drive and it\'s failed. Not the worst thing but with a hot image you
don\'t have to lose anything unless you\'re very unlucky and the second drive
fails while you do an emergency sync.

I see data as falling into several categories, each with different recovery
costs:
- the OS
- applications
- support "libraries"/collections
- working files

The OS is the biggest PITA to install/restore, as its installation often
means other things that depend on it must subsequently be (re)installed.

Applications represent the biggest time sink because each has licenses
and configurations that need to be recreated.

Libraries/collections tend to just be *large* but are really just
bandwidth-limited -- they can be restored at any time to any
machine without tweaks.

Working files change the most frequently but, as a result, tend
to be the freshest in your mind (what did you work on today?).
Contrast that with "how do you configure application X to work
the way you want it to work and co-operate with application Y?"

One tends not to do much "original work" in a given day -- the
number of bytes that YOU directly change is small, so you can
preserve your day's efforts relatively easily (a lot MAY
change on your machine, but most of those bytes were changed
by some *program* that responded to your small changes!).

Backing up libraries/collections is just a waste of disk space;
reinstalling from the originals (archive) takes just as long!

Applications can be restored from an image created just after
you installed the most recent application/update (this also
gives you a clean copy of the OS).

Restoring JUST the OS is useful if you are repurposing a
machine and, thus, want to install a different set of
applications. If you're at this point, you've likely got
lots of manual work ahead of you as you select and install
each of those apps -- before you can actually put them to use!

I am religious about keeping *only* applications and OS on the
"system disk". So, at any time, I can reinstall the image and
know that I've not "lost anything" (of substance) in the process.
Likewise, not letting applications creep onto non-system disks.

This last bit is subtly important because you want to be
able to *remove* a \"non-system\" disk and not impact the
operation of that machine.

[I've designed some "fonts" [sic] for use in my documents.
Originally, I kept the fonts -- and the associated working
files used to create them -- in a folder alongside those
documents. On a non-system/working disk. Moving those
documents/fonts is then "complicated" (not unduly so) because
the system wants to keep a handle to the fonts hosted on it!]

But particularly if the OS drive goes down it\'s very helpful to still have a
usable desktop that can assist in its own recovery.

Hence the laptop(s). Buy a SATA USB dock. It makes it a lot easier
to use (and access) "bare" drives -- from *any* machine!
 
On 25/01/2022 02:03, bitrex wrote:
On 1/24/2022 5:48 PM, David Brown wrote:
On 24/01/2022 21:39, bitrex wrote:

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.


You use RAID for three purposes, which may be combined - to get higher
speeds (for your particular usage), to get more space (compared to a
single drive), or to get reliability and better up-time in the face of
drive failures.

Yes, you should use RAID on your backups - whether it be a server with
disk space for copies of data, or \"manual RAID1\" by making multiple
backups to separate USB flash drives.  But don\'t imagine RAID is
connected with \"backup\" in any way.


 From my experience with RAID, I strongly recommend you dump these kind
of hardware RAID controllers.  Unless you are going for serious
top-shelf equipment with battery backup, guaranteed response time by
recovery engineers with spare parts and that kind of thing, use Linux
software raid.  It is far more flexible, faster, more reliable and -
most importantly - much easier to recover in the case of hardware
failure.

Any RAID system (assuming you don\'t pick RAID0) can survive a disk
failure.  The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly beep
to itself behind closed doors?), and how you can recover.  Your fancy
hardware RAID controller card is useless when you find you can\'t get a
replacement disk that is on the manufacturer\'s \"approved\" list from a
decade ago.  (With Linux, you can use /anything/ - real, virtual, local,
remote, flash, disk, whatever.)  And what do you do when the RAID card
dies (yes, that happens) ?  For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale.  (With Linux, plug the drives into a new
system.)

I have only twice lost data from RAID systems (and had to restore them
from backup).  Both times it was hardware RAID - good quality Dell and
IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
used.  A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software
RAID too, if you don\'t like Linux.)

I\'m considering a hybrid scheme where the system partition is put on the
HW controller in RAID 1, non-critical files but want fast access to like
audio/video are on HW RAID 0, and the more critical long-term on-site
mass storage that\'s not accessed too much is in some kind of software
redundant-RAID equivalent, with changes synced to cloud backup service.

That way you can boot from something other than the dodgy motherboard
software-RAID but you\'re not dead in the water if the OS drive fails,
and can probably use the remaining drive to create a today-image of the
system partition to restore from.

Worst-case you restore the system drive from your last image or from
scratch if you have to, restoring the system drive from scratch isn\'t a
crisis but it is seriously annoying, and most people don\'t do system
drive images every day

I'm sorry, but that sounds a lot like you are over-complicating things
because you have read somewhere that "hardware raid is good", "raid 0 is
fast", and "software raid is unreliable" - but you don't actually
understand any of it. (I'm not trying to be insulting at all - everyone
has limited knowledge that is helped by learning more.) Let me try to
clear up a few misunderstandings, and give some suggestions.

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device with battery backup and the rest,
and are happy to keep a spare card on-site, it will be less reliable,
slower, less flexible and harder for recovery than Linux software RAID -
by significant margins.

(I've been assuming you are using Linux, or another *nix. If you are
using Windows, then you can't do software raid properly and have far
fewer options.)

Secondly, audio and visual files do not need anything fast unless you
are talking about ridiculously high-quality video, or serving many clients
at once. 4K video wants about 25 Mbps bandwidth - a spinning rust hard
disk will usually give you about 150 MBps - roughly 50 times your
requirement. Using RAID 0 will pointlessly increase your bandwidth
while making the latency worse (especially with a hardware RAID card).
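
The arithmetic behind that, spelled out as a quick back-of-envelope
check using only the figures in the paragraph above:

# Back-of-envelope check: one spinning disk vs. one 4K video stream.
video_mbps = 25        # ~25 Mbit/s for a 4K video stream
disk_MBps = 150        # typical sustained read for spinning rust, in MB/s

video_MBps = video_mbps / 8            # 25 Mbit/s is about 3.1 MB/s
headroom = disk_MBps / video_MBps      # roughly 48x - call it fifty streams' worth

print(f"a 4K stream needs ~{video_MBps:.1f} MB/s; one disk gives ~{headroom:.0f}x that")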

Then you want other files on a software RAID with redundancy. That's
fine, but your whole system now needs at least 6 drives and a
specialised controller card, when you could get better performance and
better recoverability with 2 drives and software RAID.

You do realise that Linux software RAID is unrelated to "motherboard RAID"?
 
On 25/01/2022 05:54, Don Y wrote:
On 1/24/2022 1:39 PM, bitrex wrote:
I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
which seems adequate for just about anything I throw at it.

Depends, of course, on \"what you throw at it\".  Most of my workstations
have 144G of RAM, 5T of rust.  My smallest (for writing software) has
just 48G.  The CAD, EDA and document prep workstations can easily eat
gobs of RAM to avoid paging to disk.  Some of my SfM \"exercises\" will
eat every byte that\'s available!

I\'d be surprised if that Fujitsu can\'t be upgraded to at least 16.

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to
be in servers probably but they work fine OOTB with Windows 10/11 and
the modern Linux distros I\'ve tried, and you don\'t have to muck with
the OS software RAID or the motherboard\'s software RAID.

RAID is an unnecessary complication.  I\'ve watched all of my peers dump
their RAID configurations in favor of simple \"copies\" (RAID0 without
the controller).  Try upgrading a drive (to a larger size).  Or,
moving a drive to another machine (I have 6 identical workstations
and can just pull the \"sleds\" out of one to move them to another
machine if the first machine dies -- barring license issues).

If you have only two disks, then it is much better to use one for an
independent copy than to have them as RAID. RAID (not RAID0, which has
no redundancy) avoids downtime if you have a hardware failure on a
drive. But it does nothing to help user error, file-system corruption,
malware attacks, etc. A second independent copy of the data is vastly
better there.

But the problems you mention are from hardware RAID cards. With Linux
software raid you can usually upgrade your disks easily (full
re-striping can take a while, but that goes on in the background). You can
move your disks to other systems - I've done that, and it's not a
problem. Some combinations are harder for upgrades if you go for more
advanced setups - such as striped RAID10, which can let you take two
spinning rust disks and get lower latency and higher read throughput
than a hardware RAID0 setup could possibly do, while also having full
redundancy (at the expense of slower writes).

If you experience failures, then you assign value to the mechanism
that protects against those failures.  OTOH, if you *don\'t*, then
there any costs associated with those mechanisms become the dominant
factor in your usage decisions.  I.e., if they make other \"normal\"
activities (disk upgrades) more tedious, then that counts against
them, nullifying their intended value.

Such balances and trade-offs are important to consider. It sounds like
you have redundancy from having multiple workstations - it's a lot more
common to have a single workstation, and thus redundant disks can be a
good idea.

E.g., most folks experience PEBKAC failures which RAID won\'t prevent.
Yet, still are lazy about backups (that could alleviate those failures).

That is absolutely true - backups are more important than RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.

I use surplus \"shelfs\" as JBOD with a SAS controller.  This allows me to
also pull a drive from a shelf and install it directly in another machine
without having to muck with taking apart an array, etc.

Think about it, do you ever have to deal with a (perceived) \"failure\"
when you have lots of *spare* time on your hands?  More likely, you
are in the middle of something and not keen on being distracted by
a \"maintenance\" issue.

Thus the minimised downtime you get from RAID is a good idea!

[In the early days of the PC, I found having duplicate systems to be
a great way to verify a problem was software related vs. a \"machine
problem\":  pull drive, install in identical machine and see if the
same behavior manifests.  Also good when you lose a power supply
or some other critical bit of hardware and can work around it just by
moving media (I keep 3 spare power supplies for my workstations
as a prophylactic measure)  :> ]

Having a few spare parts on-hand is useful.
 
On 1/25/2022 11:18 AM, David Brown wrote:
On 25/01/2022 02:03, bitrex wrote:
On 1/24/2022 5:48 PM, David Brown wrote:
On 24/01/2022 21:39, bitrex wrote:

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.


You use RAID for three purposes, which may be combined - to get higher
speeds (for your particular usage), to get more space (compared to a
single drive), or to get reliability and better up-time in the face of
drive failures.

Yes, you should use RAID on your backups - whether it be a server with
disk space for copies of data, or \"manual RAID1\" by making multiple
backups to separate USB flash drives.  But don\'t imagine RAID is
connected with \"backup\" in any way.


 From my experience with RAID, I strongly recommend you dump these kind
of hardware RAID controllers.  Unless you are going for serious
top-shelf equipment with battery backup, guaranteed response time by
recovery engineers with spare parts and that kind of thing, use Linux
software raid.  It is far more flexible, faster, more reliable and -
most importantly - much easier to recover in the case of hardware
failure.

Any RAID system (assuming you don\'t pick RAID0) can survive a disk
failure.  The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly beep
to itself behind closed doors?), and how you can recover.  Your fancy
hardware RAID controller card is useless when you find you can\'t get a
replacement disk that is on the manufacturer\'s \"approved\" list from a
decade ago.  (With Linux, you can use /anything/ - real, virtual, local,
remote, flash, disk, whatever.)  And what do you do when the RAID card
dies (yes, that happens) ?  For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale.  (With Linux, plug the drives into a new
system.)

I have only twice lost data from RAID systems (and had to restore them
from backup).  Both times it was hardware RAID - good quality Dell and
IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
used.  A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software
RAID too, if you don\'t like Linux.)

I\'m considering a hybrid scheme where the system partition is put on the
HW controller in RAID 1, non-critical files but want fast access to like
audio/video are on HW RAID 0, and the more critical long-term on-site
mass storage that\'s not accessed too much is in some kind of software
redundant-RAID equivalent, with changes synced to cloud backup service.

That way you can boot from something other than the dodgy motherboard
software-RAID but you\'re not dead in the water if the OS drive fails,
and can probably use the remaining drive to create a today-image of the
system partition to restore from.

Worst-case you restore the system drive from your last image or from
scratch if you have to, restoring the system drive from scratch isn\'t a
crisis but it is seriously annoying, and most people don\'t do system
drive images every day

I\'m sorry, but that sounds a lot like you are over-complicating things
because you have read somewhere that \"hardware raid is good\", \"raid 0 is
fast\", and \"software raid is unreliable\" - but you don\'t actually
understand any of it. (I\'m not trying to be insulting at all - everyone
has limited knowledge that is helped by learning more.) Let me try to
clear up a few misunderstandings, and give some suggestions.

Well, Windows software raid is what it is, and unfortunately on my main
desktop I'm constrained to Windows.

On another PC like if I build a NAS box myself I have other options.

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device with battery backup and the rest,
and are happy to keep a spare card on-site, it will be less reliable,
slower, less flexible and harder for recovery than Linux software RAID -
by significant margins.

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
its entire existence optimizing the performance of that cache. But I
don't know how to do the real-world testing for my own use-case to know.
I think they probably compare well in benchmarks.

(I\'ve been assuming you are using Linux, or another *nix. If you are
using Windows, then you can\'t do software raid properly and have far
fewer options.)

Not on my main desktop, unfortunately. I run Linux on my laptops. If
I built a second PC for a file server I would put Linux on it, but my
"NAS" backup is a dumb eSATA external drive at the moment.

Secondly, audio and visual files do not need anything fast unless you
are talking about ridiculous high quality video, or serving many clients
at once. 4K video wants about 25 Mbps bandwidth - a spinning rust hard
disk will usually give you about 150 MBps - about 60 times your
requirement. Using RAID 0 will pointlessly increase your bandwidth
while making the latency worse (especially with a hardware RAID card).

Yes, the use cases are important, sorry for not mentioning it but I
didn't expect to get into a discussion about it in the first place!
Sometimes I stream many dozens of audio files simultaneously from disk, e.g.

<https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>

Sequential read/write performance on a benchmark for two 2TB 7200 RPM
drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
PERC H700 controller seems rather good on Windows, approaching that of my
OS SSD:

<https://imgur.com/a/2svt7nY>

Naturally the random 4k R/Ws suck. I haven't profiled it against the
equivalent for Windows Storage Spaces.


Then you want other files on a software RAID with redundancy. That\'s
fine, but you\'re whole system is now needing at least 6 drives and a
specialised controller card when you could get better performance and
better recoverability with 2 drives and software RAID.

You do realise that Linux software RAID is unrelated to \"motherboard RAID\" ?

Yep
 
On 1/25/2022 3:43 PM, bitrex wrote:

Yes, the use cases are important, sorry for not mentioning it but I
didn\'t expect to get into a discussion about it in the first place!
Sometimes I stream many dozens of audio files simultaneously from disk e.g.

https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/

Sequential read/write performance on a benchmark for two 2TB 7200 RPM
drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
Perc 700 controller seems rather good on Windows, approaching that of my
OS SSD:

https://imgur.com/a/2svt7nY

Naturally the random 4k R/Ws suck. I haven\'t profiled it against the
equivalent for Windows Storage Spaces.

These are pretty ordinary consumer 7200 RPM drives too, not high-end by any means.
 
On 25/01/2022 21:43, bitrex wrote:
On 1/25/2022 11:18 AM, David Brown wrote:
On 25/01/2022 02:03, bitrex wrote:
On 1/24/2022 5:48 PM, David Brown wrote:
On 24/01/2022 21:39, bitrex wrote:

Another nice deal for mass storage/backups of work files are these
surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
wider PCIe slot you get 8 channels of RAID 0/1 per card, the used
to be
in servers probably but they work fine OOTB with Windows 10/11 and the
modern Linux distros I\'ve tried, and you don\'t have to muck with
the OS
software RAID or the motherboard\'s software RAID.

Yes a RAID array isn\'t a backup but I don\'t see any reason not to have
your on-site backup in RAID 1.


You use RAID for three purposes, which may be combined - to get higher
speeds (for your particular usage), to get more space (compared to a
single drive), or to get reliability and better up-time in the face of
drive failures.

Yes, you should use RAID on your backups - whether it be a server with
disk space for copies of data, or \"manual RAID1\" by making multiple
backups to separate USB flash drives.  But don\'t imagine RAID is
connected with \"backup\" in any way.


  From my experience with RAID, I strongly recommend you dump these
kind
of hardware RAID controllers.  Unless you are going for serious
top-shelf equipment with battery backup, guaranteed response time by
recovery engineers with spare parts and that kind of thing, use Linux
software raid.  It is far more flexible, faster, more reliable and -
most importantly - much easier to recover in the case of hardware
failure.

Any RAID system (assuming you don\'t pick RAID0) can survive a disk
failure.  The important points are how you spot the problem (does your
system send you an email, or does it just put on an LED and quietly
beep
to itself behind closed doors?), and how you can recover.  Your fancy
hardware RAID controller card is useless when you find you can\'t get a
replacement disk that is on the manufacturer\'s \"approved\" list from a
decade ago.  (With Linux, you can use /anything/ - real, virtual,
local,
remote, flash, disk, whatever.)  And what do you do when the RAID card
dies (yes, that happens) ?  For many cards, the format is proprietary
and your data is gone unless you can find some second-hand replacement
in a reasonable time-scale.  (With Linux, plug the drives into a new
system.)

I have only twice lost data from RAID systems (and had to restore them
from backup).  Both times it was hardware RAID - good quality Dell and
IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
used.  A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software
RAID too, if you don\'t like Linux.)

I\'m considering a hybrid scheme where the system partition is put on the
HW controller in RAID 1, non-critical files but want fast access to like
audio/video are on HW RAID 0, and the more critical long-term on-site
mass storage that\'s not accessed too much is in some kind of software
redundant-RAID equivalent, with changes synced to cloud backup service.

That way you can boot from something other than the dodgy motherboard
software-RAID but you\'re not dead in the water if the OS drive fails,
and can probably use the remaining drive to create a today-image of the
system partition to restore from.

Worst-case you restore the system drive from your last image or from
scratch if you have to, restoring the system drive from scratch isn\'t a
crisis but it is seriously annoying, and most people don\'t do system
drive images every day

I\'m sorry, but that sounds a lot like you are over-complicating things
because you have read somewhere that \"hardware raid is good\", \"raid 0 is
fast\", and \"software raid is unreliable\" - but you don\'t actually
understand any of it.  (I\'m not trying to be insulting at all - everyone
has limited knowledge that is helped by learning more.)  Let me try to
clear up a few misunderstandings, and give some suggestions.

Well, Windows software raid is what it is and unfortunately on my main
desktop I\'m constrained to Windows.

OK.

On desktop Windows, "Intel motherboard RAID" is as good as it gets for
increased reliability and uptime. It is more efficient than hardware
raid, and the formats used are supported by any other motherboard and
also by Linux md raid - thus if the box dies, you can connect the disks
to a Linux machine (by SATA-to-USB converter or whatever is
convenient) and have full access.

Pure Windows software raid can only be used on non-system disks, AFAIK,
though details vary between Windows versions.

These days, however, you get higher reliability (and much higher speed)
with just a single M2 flash disk rather than RAID1 of two spinning rust
disks. Use something like Clonezilla to make a backup image of the disk
to have a restorable system image.

On another PC like if I build a NAS box myself I have other options.

First, I recommend you drop the hardware controllers.  Unless you are
going for a serious high-end device with battery backup and the rest,
and are happy to keep a spare card on-site, it will be less reliable,
slower, less flexible and harder for recovery than Linux software RAID -
by significant margins.

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
it\'s entire existence optimizing the performance of that cache. But I
don\'t know how to do the real-world testing for my own use-case to know.
I think they probably compare well in benchmarks.

Shocking or not, that's the reality. (This is in reference to Linux md
software raid - I don't know the details of software raid on other systems.)

There was a time when hardware raid cards were much faster, but many
things have changed:

1. It used to be a lot faster to do the RAID calculations (xor for
RAID5, and more complex operations for RAID6) in dedicated ASICs than in
processors. Now processors can handle these with a few percent usage of
one of their many cores. (See the toy parity sketch after this list.)

2. Saturating the bandwidth of multiple disks used to require a
significant proportion of the IO bandwidth of the processor and
motherboard, so that having the data duplication for redundant RAID
handled by a dedicated card reduced the load on the motherboard buses.
Now it is not an issue - even with flash disks.

3. It used to be that hardware raid cards reduced the latency for some
accesses because they had dedicated cache memory (this was especially
true for Windows, which has always been useless at caching disk data
compared to Linux). Now with flash drives, the extra card /adds/ latency.

4. Software raid can make smarter use of multiple disks, especially when
reading. For a simple RAID1 (duplicate disks), a hardware raid card can
only handle the reads as being from a single virtual disk. With
software RAID1, the OS can coordinate accesses to all disks
simultaneously, and use its knowledge of the real layout to reduce
latencies.

5. Hardware raid cards have very limited and fixed options for raid
layout. Software raid can let you have options that give different
balances for different needs. For a read-mostly layout on two disks,
Linux raid10 can give you better performance than raid0 (hardware or
software) while also having redundancy.
<https://en.wikipedia.org/wiki/Non-standard_RAID_levels#LINUX-MD-RAID-10>
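
To make point 1 concrete, here is a toy illustration of the xor parity
idea behind RAID5 - pure Python, purely illustrative, and nothing like
how md actually lays data out on disk:

# Toy RAID5 parity: the parity block is the xor of the data blocks, so any
# single missing block can be rebuilt by xor-ing the parity with the survivors.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"disk0 data..", b"disk1 data..", b"disk2 data.."]   # equal-length blocks
parity = xor_blocks(data)

# Pretend disk 1 died; rebuild its block from the parity plus the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)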



(I\'ve been assuming you are using Linux, or another *nix.  If you are
using Windows, then you can\'t do software raid properly and have far
fewer options.)

Not on my main desktop, unfortunately not. I run Linux on my laptops. If
I built a second PC for a file server I would put Linux on it but my
\"NAS\" backup is a dumb eSATA external drive at the moment

Secondly, audio and visual files do not need anything fast unless you
are talking about ridiculous high quality video, or serving many clients
at once.  4K video wants about 25 Mbps bandwidth - a spinning rust hard
disk will usually give you about 150 MBps - about 60 times your
requirement.  Using RAID 0 will pointlessly increase your bandwidth
while making the latency worse (especially with a hardware RAID card).

Yes, the use cases are important, sorry for not mentioning it but I
didn\'t expect to get into a discussion about it in the first place!
Sometimes I stream many dozens of audio files simultaneously from disk e.g.

https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/

Sequential read/write performance on a benchmark for two 2TB 7200 RPM
drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
Perc 700 controller seems rather good on Windows, approaching that of my
OS SSD:

https://imgur.com/a/2svt7nY

Naturally the random 4k R/Ws suck. I haven\'t profiled it against the
equivalent for Windows Storage Spaces.

SATA is limited to about 600 MB/s. A good spinning rust can get up to about
200 MB/s for continuous reads. RAID0 of two spinning rusts can
therefore get fairly close to the streaming read speed of a SATA flash SSD.

Note that a CD-quality uncompressed audio stream is 0.17 MB/s. 24-bit,
192 kHz uncompressed is about 1 MB/s. That is, a /single/ spinning rust
disk (with an OS that will cache sensibly) will handle nearly 200
such streams.
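
Spelling that arithmetic out (standard sample rates and sample widths;
the 200 MB/s disk figure is the one from above):

# Bandwidth of uncompressed stereo audio streams vs. one spinning disk.
def stream_MBps(rate_hz, bytes_per_sample, channels=2):
    return rate_hz * bytes_per_sample * channels / 1e6

cd = stream_MBps(44_100, 2)        # 16-bit / 44.1 kHz  -> 0.1764 MB/s
hi_res = stream_MBps(192_000, 3)   # 24-bit / 192 kHz   -> 1.152 MB/s
disk_MBps = 200                    # good spinning rust, continuous reads

print(f"CD stream: {cd:.2f} MB/s, 24/192 stream: {hi_res:.2f} MB/s")
print(f"one disk covers roughly {disk_MBps / hi_res:.0f} hi-res streams")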


Now for a little bit on prices, which I will grab from Newegg as a
random US supplier, using random component choices and approximate
prices to give a rough idea.

2TB 7200rpm spinning rust - $50
Perc H700 (if you can find one) - $150

2TB 2.5\" SSD - $150

2TB M2 SSD - $170


So for the price of your hardware raid card and two spinning rusts you
could get, for example:

1. An M2 SSD with /vastly/ higher speeds than your RAID0, higher
reliability, and with a format that can be read on any modern computer
(at most you might have to buy a USB-to-M2 adaptor ($13), rather than an
outdated niche raid card).

2. 4 spinning rusts in a software raid10 setup - faster, bigger, and
better reliability.

3. A 2.5" SSD and a spinning rust, connected as a Linux software RAID1
pair with "write-behind" on the rust. You get the read latency benefits
of the SSD and the combined streaming throughput of both; writes go first
to the SSD, and the slow rust write speed is not a bottleneck.

There is no scenario in which hardware raid comes out on top, compared
to Linux software raid. Even if I had the raid card and the spinning
rust, I'd throw out the raid card and have a better result.


Then you want other files on a software RAID with redundancy.  That\'s
fine, but you\'re whole system is now needing at least 6 drives and a
specialised controller card when you could get better performance and
better recoverability with 2 drives and software RAID.

You do realise that Linux software RAID is unrelated to \"motherboard
RAID\" ?

Yep
 
On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
On 1/25/2022 11:18 AM, David Brown wrote:

First, I recommend you drop the hardware controllers. Unless you are
going for a serious high-end device...

It seems shocking that Linux software RAID could approach the
performance of a late-model cached hardware controller that can spend
it\'s entire existence optimizing the performance of that cache.

Not shocking at all; 'the performance' that matters is rarely similar to measured
benchmarks. Even seasoned computer users can misunderstand their
needs and needlessly multiply their overhead costs to get an improvement in operation.

Pro photographers, sound engineers, and the occasional video-edit shop
will need one-user big fast disks, but in the modern market, the smaller and slower
disks ARE big and fast, in absolute terms.
 
On 1/26/2022 1:04 PM, whit3rd wrote:
Pro photographers, sound engineering, and the occasional video edit shop
will need one-user big fast disks, but in the modern market, the smaller and slower
disks ARE big and fast, in absolute terms.

More importantly, they are very reliable. I come across thousands (literally)
of scrapped machines (disks) every week. I've built a gizmo to wipe them and
test them in the process. The number of "bad" disks is a tiny fraction; most
of our discards are disks that we deem too small to bother with (250G or
smaller).

As most come out of corporate settings (desktops being consumer-quality
while servers/arrays are enterprise-grade), they tend to have high PoH figures...
many exceeding 40K (4-5 years at 24/7). Still, no consequences to data
integrity.

Surely, if these IT departments feared for data on the thousands of
seats they maintain, they would argue for the purchase of mechanisms
to reduce that risk (as the IT department specs the devices, if they
see high failure rates, all of their consumers will bitch about the
choice that has been IMPOSED upon them!)
 
