Time to Upgrade ?:-}

On Tue, 04 Aug 2015 09:28:56 -0400, Joe Gwinn <joegwinn@comcast.net>
Gave us:

The reason tape is resistant is that it has no file system
on it and cannot be accessed randomly.

But malware CAN still ride along inside the stored files themselves
and be passed on when they are restored.
 
On Tue, 04 Aug 2015 09:28:56 -0400, Joe Gwinn <joegwinn@comcast.net>
Gave us:

>This will not help if the files were encrypted by ransomware.

No shit. It/they are file system independent.

So tape gets you nothing but dog-slow, linear, WORM-type access.
 
Phil Hobbs schreef op 08/04/2015 om 03:28 AM:
On 8/3/2015 8:52 PM, N. Coesel wrote:
Phil Hobbs schreef op 08/03/2015 om 05:09 PM:

I wouldn't guarantee that a HDD sitting on a shelf would start up
reliably at that age.

Which is why you must not shelve hard drives. I choose to have all my
data online on multiple hard drives. That way I have health monitoring
of the media.
I found out I lost little bits of data by storing it on CDs or hard
drives because I misplaced them or never cared to restore when changing
computers. Making backups is one thing but organising the media in a
sensible way is a cumbersome task.

You don't want to have them all mounted to a filesystem, though, because
ransomware is spreading like a plague.

True. One backup is in a NAS and the other on an external USB hard drive
which is only mounted during a (nightly) backup so I can unplug it at will.
 
Don Y schreef op 08/04/2015 om 04:37 AM:
On 8/3/2015 5:52 PM, N. Coesel wrote:
Phil Hobbs schreef op 08/03/2015 om 05:09 PM:

I wouldn't guarantee that a HDD sitting on a shelf would start up
reliably at that age.

Which is why you must not shelve hard drives. I choose to have all my
data
online on multiple hard drives. That way I have health monitoring of
the media.

Unless you're actively *checking* the media contents while they're
spinning, you still have no idea whether or not any particular file can
be retrieved when you need it.

My daily backup routine checks the files against their 'originals' so
verification goes automatically in my setup.

I found out I lost little bits of data by storing it on CDs or hard
drives
because I misplaced them or never cared to restore when changing
computers.
Making backups is one thing but organising the media in a sensible way
is a
cumbersome task.

Then, logging the contents of each drive in a relational database -- along
with applicable metadata (e.g., MD5's of each file, size, etc.). So,
I can query the database *without* having any drives spinning -- to
locate a file of interest.

Now that is a perfect definition of cumbersome.

You still have the risk of a drive not spinning up after you have not
been needing it for several years. That is exactly the problem I faced.
Sometimes I need a file after a decade or more. If it is on some hard
drive on a shelf I have no idea about the condition of the drive let
alone whether the file still exists. All this assuming I can still plug
the hard drive in my system... I don't think my new PC even has PATA
connectors for example.
 
On 8/4/2015 9:39 AM, N. Coesel wrote:
Don Y schreef op 08/04/2015 om 04:37 AM:
On 8/3/2015 5:52 PM, N. Coesel wrote:
Phil Hobbs schreef op 08/03/2015 om 05:09 PM:

I wouldn't guarantee that a HDD sitting on a shelf would start up
reliably at that age.

Which is why you must not shelve hard drives. I choose to have all my
data
online on multiple hard drives. That way I have health monitoring of
the media.

Unless you're actively *checking* the media contents while they're
spinning, you still have no idea whether or not any particular file can
be retrieved when you need it.

My daily backup routine checks the files against their 'originals' so
verification goes automatically in my setup.

Then you're probably talking about an order of magnitude (or THREE!)
less data! I'm talking about archives/repositories -- EVERYTHING
you've ever wanted to preserve!

(How many terabytes do you "check against their originals" in each of your
daily backup routines? If a file was deleted, today -- but backed up
YESTERDAY -- what do you check its backup against?)

I found out I lost little bits of data by storing it on CDs or hard
drives
because I misplaced them or never cared to restore when changing
computers.
Making backups is one thing but organising the media in a sensible way
is a
cumbersome task.

Then, logging the contents of each drive in a relational database -- along
with applicable metadata (e.g., MD5's of each file, size, etc.). So,
I can query the database *without* having any drives spinning -- to
locate a file of interest.

Now that is a perfect definition of cumbersome.

Automagic. Essentially, doing the same sort of thing that locate.updatedb(8)
does -- that locate(1) eventually uses. Except, not constrained by requiring
everything in the locate database to be mounted when locate.updatedb(8)
runs!
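A minimal sketch of that catalog idea, assuming Python's stdlib `sqlite3` and `hashlib` (the schema, paths, and function name here are invented for illustration, not Don Y's actual tooling):

```python
import hashlib
import os
import sqlite3

def index_drive(db_path, drive_label, mount_point):
    """Walk a mounted drive, recording path, size, and MD5 of every file.

    Re-running appends fresh rows; a real tool would also prune or
    version old entries.
    """
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (drive TEXT, path TEXT, size INTEGER, md5 TEXT)""")
    for root, _dirs, names in os.walk(mount_point):
        for name in names:
            full = os.path.join(root, name)
            h = hashlib.md5()
            with open(full, "rb") as f:
                # Hash in 1 MB chunks so huge files don't need to fit in RAM.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            con.execute("INSERT INTO files VALUES (?, ?, ?, ?)",
                        (drive_label, os.path.relpath(full, mount_point),
                         os.path.getsize(full), h.hexdigest()))
    con.commit()
    con.close()

# Later, with every drive back on the shelf, the catalog alone answers
# "which drive holds foo.pdf?":
#   SELECT drive, path FROM files WHERE path LIKE '%foo.pdf%';
```

The point of the design is exactly the one made above: the query runs against the database, so no drive needs to be spinning just to locate a file.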

You still have the risk of a drive not spinning up after you have not been
needing it for several years.

A drive never sits unused for several years. That's the point! Even if
I don't happen to need to access ANY of the files on a particular
volume, the code that walks through the archive(s) knows that it hasn't
*examined* those files in N days and requests the drive be mounted.

So, not only is the *media* accessed but the actual magnetic domains
comprising each and every file contained on that medium are routinely
"examined" and verified -- against the MD5 hash/checksum that was stored
in the database when the file was added to the archive.

In the event bit rot, surface wear, failed reallocation, etc. causes
*that* instance of *that* file to degrade (checksum no longer verifies)
or become "unavailable" (read errors, seek errors, etc.), then I get
notified that some set of files on some particular medium are now
"lost"; their *backups* must be recovered and used to recreate the
"mirror copy" (so I, once again, have two copies of each file).

Having the files on media that is spinning 24/7/365 doesn't guarantee
you that they are intact or accessible -- unless you check each and
every file "regularly" (for some definition of "regularly").
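The scrubbing pass described above can be sketched the same way (an illustrative stand-alone checker, not the poster's actual scripts; here the recorded checksums arrive as a plain dict of relative path to MD5 rather than a database):

```python
import hashlib
import os

def scrub(mount_point, expected_md5):
    """Re-read every cataloged file; flag any whose checksum has drifted.

    expected_md5 maps relative path -> MD5 hex digest recorded when the
    file was added to the archive. Returns the files whose mirror copy
    must now be restored.
    """
    lost = []
    for rel, want in expected_md5.items():
        full = os.path.join(mount_point, rel)
        try:
            h = hashlib.md5()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            if h.hexdigest() != want:
                lost.append((rel, "checksum mismatch"))   # bit rot, etc.
        except OSError as e:   # read errors, missing file, dead drive...
            lost.append((rel, str(e)))
    return lost
```

Because every byte of every file is re-read, this exercises the medium the way the paragraph above describes, rather than merely confirming the drive spins up.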

That is exactly the problem I faced. Sometimes I
need a file after a decade or more. If it is on some hard drive on a shelf I
have no idea about the condition of the drive let alone whether the file still
exists. All this assuming I can still plug the hard drive in my system... I
don't think my new PC even has PATA connectors for example.

I sidestep the PATA/SATA/SCA/SCSI/FW/etc. issue by adopting USB drives.
This allows me to connect the drive directly to *any* machine -- instead
of tethering it to *one* specific machine (which could crash).

Because the medium is used as "Just a Bunch of Files" (JBOF?? :> )
and doesn't embed any particular RAID structures, there's no need
for me to even use drives of identical sizes, or store the mirror
copy of drive1's files ENTIRELY on drive2! Some could be on drive2
while others are on drive8.

[Consider what happens to a RAID array with a failed/failing drive;
do you have a hot/cold spare of the appropriate size ON HAND? Are
you ready to rebuild the array as soon as *it* tells you that you have
a failed drive? Does it tell you when ANY portion of the drive's
contents are damaged -- even if you haven't explicitly gone looking
at those particular files??]

I.e., all of the "mechanism" that would be present in a RAID
configuration is maintained in the database and the scripts that
walk the filesystem(s) to update/maintain/verify its contents!
 
Don Y schreef op 08/04/2015 om 07:23 PM:
On 8/4/2015 9:39 AM, N. Coesel wrote:
Don Y schreef op 08/04/2015 om 04:37 AM:
On 8/3/2015 5:52 PM, N. Coesel wrote:
Phil Hobbs schreef op 08/03/2015 om 05:09 PM:

I wouldn't guarantee that a HDD sitting on a shelf would start up
reliably at that age.

Which is why you must not shelve hard drives. I choose to have all my
data
online on multiple hard drives. That way I have health monitoring of
the media.

Unless you're actively *checking* the media contents while they're
spinning, you still have no idea whether or not any particular file can
be retrieved when you need it.

My daily backup routine checks the files against their 'originals' so
verification goes automatically in my setup.

Then you're probably talking about an order of magnitude (or THREE!)
less data! I'm talking about archives/repositories -- EVERYTHING
you've ever wanted to preserve!

Currently about 600GB but I expect that to increase steadily.
 
On 8/4/2015 12:06 PM, N. Coesel wrote:
Don Y schreef op 08/04/2015 om 07:23 PM:
On 8/4/2015 9:39 AM, N. Coesel wrote:
Don Y schreef op 08/04/2015 om 04:37 AM:
On 8/3/2015 5:52 PM, N. Coesel wrote:
Phil Hobbs schreef op 08/03/2015 om 05:09 PM:

I wouldn't guarantee that a HDD sitting on a shelf would start up
reliably at that age.

Which is why you must not shelve hard drives. I choose to have all my
data
online on multiple hard drives. That way I have health monitoring of
the media.

Unless you're actively *checking* the media contents while they're
spinning, you still have no idea whether or not any particular file can
be retrieved when you need it.

My daily backup routine checks the files against their 'originals' so
verification goes automatically in my setup.

Then you're probably talking about an order of magnitude (or THREE!)
less data! I'm talking about archives/repositories -- EVERYTHING
you've ever wanted to preserve!

Currently about 600GB but I expect that to increase steadily.

I've got 1TB spinning on each of my 8 workstations. Granted, some (small
amount!) of that is operating system. But, the bulk is specific to the
activities performed *on* that particular workstation. E.g., CAD-related
files on the CAD workstation (but not on the Multimedia Authoring
workstation, etc.)

My archive is *many* TB spanning more than 30 years. As such, it's common
for files not to be "viewed/accessed" for long periods of time. I'd not
want to keep all of that "spinning, on-line" *just* so I could *hope*
it was "intact".

A friend had what he thought was the "clever" idea of keeping all his
ROM images on his PC (early 80's) thinking that they'd be backed up
each time his PC was backed up. Never occurred to him that this didn't
guarantee the files were intact! Or, even *present* on the machine when
the next backup came along ("Gee, where did those files go? There's
no sign of them in yesterday's backup... or the day before... or the WEEK
before... or...")

Ask yourself how you'll know when you've INTENTIONALLY discarded a file
vs. unintentionally having accomplished the same feat.
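One illustrative way to answer that question mechanically: diff each backup's manifest against the previous one, and treat any disappearance that was never deliberately logged as suspect (names are invented; a "manifest" here is just a set of paths):

```python
def missing_since_yesterday(yesterday, today, intentionally_deleted):
    """Return paths that vanished without being deliberately discarded.

    yesterday, today: iterables of paths from successive backup manifests.
    intentionally_deleted: paths the user logged as discarded on purpose.
    """
    vanished = set(yesterday) - set(today)
    return sorted(vanished - set(intentionally_deleted))

# "b" disappeared with no deletion logged -> flag it; "c" was deliberate.
# missing_since_yesterday({"a", "b", "c"}, {"a"}, {"c"})  ->  ["b"]
```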
 
On Sun, 02 Aug 2015 09:16:51 -0700, Jim Thompson
<To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote:

I think it's time I upgraded my 'Spice' machine... my present machine
is as follows... no laughter please... I've successfully done at least
at least 20 chip designs on this machine. What modern equivalent
should I replace it with?

====================================

Computer Profile Summary
Computer Name: Analog3 (in ANALOG)

Profile Date: Wednesday, June 29, 2005 11:59:53 AM

Operating System
Windows 2000 Professional Service Pack 3 (build 2195)

Processor and Main Circuit Board: 2.20 gigahertz AMD Athlon 64

128 kilobyte primary memory cache
1024 kilobyte secondary memory cache
Bus Clock: 200 megahertz

BIOS: Phoenix Technologies, LTD 6.00PG 07/28/2004

Drives
137.44 Gigabytes Usable Hard Drive Capacity
93.05 Gigabytes Hard Drive Free Space

LITE-ON COMBO SOHC-5232K [CD-ROM drive]

3.5" format removable media [Floppy drive]

WDC WD1600JB-00EVA0 [Hard drive] (160.04 GB) SMART Status: Healthy

Memory Modules
1024 Megabytes Installed Memory

...Jim Thompson

My new Dell PC runs Spice about 5x faster than my old HP.

HP: dual-core 1.8 GHz Xeon, 2 GB RAM, Win XP, 2 threads in LTspice

Dell: quad-core 2.8 GHz Xeon, 8 GB RAM, 64-bit Win7, 4 threads

The hard drives are faster, which may help in Spice too, making huge
.RAW files.
 
DecadentLinuxUserNumeroUno wrote:
On 3 Aug 2015 09:07:59 GMT, Jasen Betts<jasen@xnet.co.nz> Gave us:

On 2015-08-02, Jim Thompson<To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote:
On Sun, 02 Aug 2015 13:27:14 -0400, Phil Hobbs

Living in seclusion ;-) for quite awhile... what's the best Intel
processor for number crunching?

"Xeon Phi" AFAICT

Prolly not what you want.

The new i7 units with six main cores are more consumer level. A Xeon
usually requires a better than normal motherboard as well, so are
outside what most folks want to spend.

I have a socket 2011 i7-3930k on an EVGA X79 Dark mobo.

It keeps up even with the newer class CPUs.

Why that idiot continues to include a binary group in his posts when
most NSPs do not even carry them is beyond me.

You mean that a.b.s.e. is an insufficient resource for schematics,
project pictures, etc?
 
On Tue, 04 Aug 2015 14:33:50 -0700, Robert Baer
<robertbaer@localnet.com> wrote:

DecadentLinuxUserNumeroUno wrote:
On 3 Aug 2015 09:07:59 GMT, Jasen Betts<jasen@xnet.co.nz> Gave us:

On 2015-08-02, Jim Thompson<To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote:
On Sun, 02 Aug 2015 13:27:14 -0400, Phil Hobbs

Living in seclusion ;-) for quite awhile... what's the best Intel
processor for number crunching?

"Xeon Phi" AFAICT

Prolly not what you want.

The new i7 units with six main cores are more consumer level. A Xeon
usually requires a better than normal motherboard as well, so are
outside what most folks want to spend.

I have a socket 2011 i7-3930k on an EVGA X79 Dark mobo.

It keeps up even with the newer class CPUs.

Why that idiot continues to include a binary group in his posts when
most NSPs do not even carry them is beyond me.

You mean that a.b.s.e. is an insufficient resource for schematics,
project pictures, etc?

What would DecadentLoser know ?>:-}

...Jim Thompson
--
| James E.Thompson | mens |
| Analog Innovations | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| San Tan Valley, AZ 85142 Skype: skypeanalog | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
On Tue, 04 Aug 2015 14:33:50 -0700, Robert Baer
<robertbaer@localnet.com> Gave us:

You mean that a.b.s.e. is an insufficient resource for schematics,
project pictures, etc?

Read what I said, jackass. MOST NSPs no longer carry binary groups.

And most of the schematics here are in the form of netlists, not some
graphic snapshot. Grow up and smell the evolution. There are plenty of
sites for posting photos as well, and folks do not need to perform
stupid conversions first to view them.

You are worse than he is with your inane mentality.
 
On Tue, 04 Aug 2015 14:37:44 -0700, Jim Thompson
<To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> Gave us:

What would DecadentLoser know ?>:-}

...Jim Thompson

A damn sight more knowledgeable than some retarded fuck who was once a
brass rat asswipe who now has to ask about processor speeds because he
is too fucking stupid to know how to research any facts himself any
more.
 
On Tue, 04 Aug 2015 00:33:52 -0400, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote:

On Tue, 04 Aug 2015 02:52:22 +0200, "N. Coesel" <nico@niks.nl> Gave us:

Which is why you must not shelve hard drives. I choose to have all my
data online on multiple hard drives. That way I have health monitoring
of the media.

A 'shelved' hard drive does NOT degrade. An 'in use' hard drive
DOES. (rolls eyes)

In theory, I should agree with you. However, I once had a very
different experience. I was building RAID arrays for customers out of
identical drives. Most common was RAID 0+1 which consisted of 5
drives. After about a year of faultless operation, one drive in one
array started to show signs of failure. I replaced the drive with a
brand new "shelved" drive, re-mirrored and continued business as
usual. A few days later, another drive started complaining, so I
replaced it. That made me worry, so I started monitoring the DPT
controller statistics only to find that the first drive that I had
previously replaced was beginning to fail. I replaced it with yet
another new "shelved" drive. During re-mirroring, another of the
original drives began to fail. I ended up copying everything to a big
single drive (from another manufacturer) and shut down the array.

Initially, I thought that there was some kind of power supply issue
that was killing the drives. I had a spare overpriced power supply
which I swapped in place of the original, but that wasn't the problem.
I couldn't check any more new "shelved" drives because I only had
three spares. I won't go into the eventual solution, as it was a bit
strange and complexicated.

The bottom line is that it appears that these drives aged at the same
rate whether spinning or sitting powered off on the shelf. My
guess(tm) is that there was some kind of IC package leakage, chemical
attack, plating deterioration, manufacturing defect that was causing
the failures.

This is the reason that I detest RAID arrays built from identical
drives, because they all tend to fail at the same time.

> Sheesh.

Gesundheit.

--
Jeff Liebermann jeffl@cruzio.com
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
 
On Tue, 04 Aug 2015 15:16:52 -0700, Jeff Liebermann <jeffl@cruzio.com>
Gave us:

The bottom line is that it appears that these drives aged at the same
rate whether spinning or sitting powered off on the shelf. My
guess(tm) is that there was some kind of IC package leakage, chemical
attack, plating deterioration, manufacturing defect that was causing
the failures.

Yeah. The brand of the drive.

I use exclusively Seagate drives. I used a few IBM drives when
perpendicular recording first came out because IBM was the leader in MR
head tech, and all the others licensed their IP or actual hardware to make
their drives.

You will find that much of the commercial comm industry uses Toshiba
SAS drives currently. 2.5 inch laptop form factor but double the slim
height. I do not know what HP puts in their blades in the SAS hot swap
slots. Probably Seagate or Toshiba.

WD sucks. Consumer level crap.

Then again an SSD (either mSATA or m.2 on PCIe) RAID array is also
becoming popular and they have a regular change-out schedule 'cause the
per GB price is cheap by comparison to the days when the HDs were the
most expensive elements in the system. They got racks full of them now.
Sort of like the DAT tape days when they were rotated daily and weekly,
etc.

I suspect the future will be a rack full of Samsung M.2 sticks on
multiple redundant RAID arrays. Dem suckers are fast.
 
On Mon, 03 Aug 2015 00:40:15 -0400 DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote in Message id:
<m0stra13447aijgc9l75mhhimlgbr5ujni@4ax.com>:

On Sun, 2 Aug 2015 19:05:15 -0700 (PDT), "dcaster@krl.org"
dcaster@krl.org> Gave us:

On Sunday, August 2, 2015 at 8:48:42 PM UTC-4, John Larkin wrote:


We backup design releases to CDs, which I store in the cave at home.
Some are 10 years old, and I occasionally have to retrieve one. So
far, it has always worked.


--

John Larkin Highland Technology, Inc
lunatic fringe electronics

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

I have had CD's that were no longer 100 % readable.

Dan


Hard drives are cheap. Get an mSATA drive and USB drive enclosure and
every design directory you ever had can be fully backed up onto a device
which reads as fast as your HD subsystems do and will for a long time to
come.

Easy greasy Slap-it-in-and-go-soeasy.

I wouldn't be so sure of that.
http://www.pcworld.com/article/2921590/death-and-the-unplugged-ssd-how-much-you-really-need-to-worry-about-ssd-reliability.html

Additionally, the retention time is affected by how much data has been
written to an SSD. Intel's retention specification for their SSDs when the
drive is near its rated endurance is 90 days.
 
On Wed, 05 Aug 2015 09:22:08 -0400, JW <none@dev.null> Gave us:

On Mon, 03 Aug 2015 00:40:15 -0400 DecadentLinuxUserNumeroUno
DLU1@DecadentLinuxUser.org> wrote in Message id:
m0stra13447aijgc9l75mhhimlgbr5ujni@4ax.com>:

On Sun, 2 Aug 2015 19:05:15 -0700 (PDT), "dcaster@krl.org"
dcaster@krl.org> Gave us:

On Sunday, August 2, 2015 at 8:48:42 PM UTC-4, John Larkin wrote:


We backup design releases to CDs, which I store in the cave at home.
Some are 10 years old, and I occasionally have to retrieve one. So
far, it has always worked.


--

John Larkin Highland Technology, Inc
lunatic fringe electronics

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

I have had CD's that were no longer 100 % readable.

Dan


Hard drives are cheap. Get an mSATA drive and USB drive enclosure and
every design directory you ever had can be fully backed up onto a device
which reads as fast as your HD subsystems do and will for a long time to
come.

Easy greasy Slap-it-in-and-go-soeasy.

I wouldn't be so sure of that.
http://www.pcworld.com/article/2921590/death-and-the-unplugged-ssd-how-much-you-really-need-to-worry-about-ssd-reliability.html

Additionally, the retention time is affected by how much data has been
written to an SSD. Intel's retention specification for their SSDs when the
drive is near its rated endurance is 90 days.

You fail to realize that a RAID array on a shelf can lose data and
still have it fully recoverable, and up to two entire volumes can be
lost and still be fully recovered from the remaining array elements.
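The single-failure half of that claim is easy to demonstrate: RAID 5-style XOR parity rebuilds any one lost volume from the survivors (tolerating two simultaneous losses requires the second, Reed-Solomon syndrome that RAID 6 adds). A toy sketch, with byte strings standing in for whole volumes:

```python
def xor_blocks(*blocks):
    """XOR together equal-length byte strings (the RAID 5 parity op)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Three data "volumes" plus one parity volume:
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Lose d1 entirely; rebuild it bit-for-bit from the survivors,
# since d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```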

Ooops, you lose.

Also you must not have noticed that the article was skewed and was
actually a plus for solid storage technology as those very same
environmental conditions will most certainly, and in every case, cause a
data loss on simple optical storage media, regardless of what brand you
dopes think is so reliable.
 
On Wed, 05 Aug 2015 09:54:40 -0400 DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> wrote in Message id:
<4754satl3bpghkcignj0u19a9d0q719ldq@4ax.com>:

On Wed, 05 Aug 2015 09:22:08 -0400, JW <none@dev.null> Gave us:

On Mon, 03 Aug 2015 00:40:15 -0400 DecadentLinuxUserNumeroUno
DLU1@DecadentLinuxUser.org> wrote in Message id:
m0stra13447aijgc9l75mhhimlgbr5ujni@4ax.com>:

On Sun, 2 Aug 2015 19:05:15 -0700 (PDT), "dcaster@krl.org"
dcaster@krl.org> Gave us:

On Sunday, August 2, 2015 at 8:48:42 PM UTC-4, John Larkin wrote:


We backup design releases to CDs, which I store in the cave at home.
Some are 10 years old, and I occasionally have to retrieve one. So
far, it has always worked.


--

John Larkin Highland Technology, Inc
lunatic fringe electronics

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

I have had CD's that were no longer 100 % readable.

Dan


Hard drives are cheap. Get an mSATA drive and USB drive enclosure and
every design directory you ever had can be fully backed up onto a device
which reads as fast as your HD subsystems do and will for a long time to
come.

Easy greasy Slap-it-in-and-go-soeasy.

I wouldn't be so sure of that.
http://www.pcworld.com/article/2921590/death-and-the-unplugged-ssd-how-much-you-really-need-to-worry-about-ssd-reliability.html

Additionally, the retention time is affected by how much data has been
written to an SSD. Intel's retention specification for their SSDs when the
drive is near its rated endurance is 90 days.

You fail to realize that a RAID array on a shelf can lose data and
still have it fully recoverable, and up to two entire volumes can be
lost and still be fully recovered from the remaining array elements.

Who's talking about RAID? My statement was about SSDs and the (your)
idiocy of using them as archival storage.

> Ooops, you lose.

Only in your fevered, walnut-sized "mind", AW.

Also you must not have noticed that the article was skewed and was
actually a plus for solid storage technology as those very same
environmental conditions will most certainly, and in every case, cause a
data loss on simple optical storage media, regardless of what brand you
dopes think is so reliable.

Not talking about optical media either, AW. Do try to keep up, m'kay?
 
On Thu, 06 Aug 2015 09:01:13 -0400, JW <none@dev.null> Gave us:

Who's talking about RAID? My statement was about SSDs and the (your)
idiocy of using them as archival storage.

There is no idiocy, dingledorf.

I have drives which have sat dormant for two years and still fire up
fine and contain all their data. I have several Linux distros installed
across several of them but typically only use one Linux variant, so
those sit dormant until I set up a new family of distros on them to
check out the next thing in Linux.

Your idiocy abounds.

And the musician who wrote that article isn't far behind you.
 
On Thu, 06 Aug 2015 09:10:55 -0400, DecadentLinuxUserNumeroUno
<DLU1@DecadentLinuxUser.org> Gave us:

On Thu, 06 Aug 2015 09:01:13 -0400, JW <none@dev.null> Gave us:

Who's talking about RAID? My statement was about SSDs and the (your)
idiocy of using them as archival storage.

There is no idiocy, dingledorf.

I have drives which have sat dormant for two years and still fire up
fine and contain all their data. I have several Linux distros installed
across several of them but typically only use one Linux variant, so
those sit dormant until I set up a new family of distros on them to
check out the next thing in Linux.

Your idiocy abounds.

And the musician who wrote that article isn't far behind you.

Oh and the RAID reference was because if your "archival storage" is in
the form of a RAID array, the likelihood that you'll experience ANY data
loss is so close to nil that the shot noise has a better chance of
providing an errant bit.

So you lose on all fronts, JW.
 
On Thursday, August 6, 2015 at 11:28:08 AM UTC-7, Jim Thompson wrote:
On Sun, 02 Aug 2015 09:16:51 -0700, Jim Thompson
To-Email-Use-The-Envelope-Icon@On-My-Web-Site.com> wrote:

I think it's time I upgraded my 'Spice' machine...

I'm getting the general impression that I should avoid 64-bit to make
sure that my legacy programs will still work. Is that correct?

I hope not! There's very little (and mostly low-end) computer hardware
that isn't 64-bit nowadays, and software support for 32-bit is
uncertain. Any software speedups will be implemented and tested
on 64-bit hardware (with lots of RAM: 32-bit tops out at 2 to 4 Gbytes).
 
