Is this Intel i7 machine good for LTSpice?

On Tue, 04 Nov 2014 23:05:17 -0800, miso <miso@sushi.com> wrote:

Joerg wrote:


Normally on the HD. But not in the DOS days, there I used (part of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk


I can't speak for stuff they sell at box stores, but all my drives have
cache. [64Mbytes in my case, newer drives use 128Mbytes.] There is zero
reason to do a RAM disk. I'm running a software RAID, so besides the cache,
much of the data is in RAM anyway prior to being written to disk. [The
software RAID is one reason to use error detecting RAM.]

When using a lot of caching with disk writes, make sure that you have
some kind of UPS, so that operations are actually written to the
physical disk, before the UPS power is lost. Some file systems seem to
corrupt the file system data, if the power is lost at a bad time.
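
(A minimal Python sketch of the "make sure it actually hits the disk"
step; the file name and data below are just placeholders for
illustration:)

    import os

    data = b"\x00" * 1024 * 1024          # placeholder for real output

    # Write a result file and push it out of the OS write cache.
    with open("sim_result.raw", "wb") as f:
        f.write(data)
        f.flush()                         # flush the userspace buffer
        os.fsync(f.fileno())              # ask the OS to commit to the device

    # Note: the drive's own volatile write cache is a separate layer;
    # that is where the UPS (or a barrier-aware file system) comes in.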
 
On 05/11/2014 03:34, rickman wrote:
On 11/4/2014 8:47 PM, Joerg wrote:
rickman wrote:
On 11/4/2014 10:07 AM, Joerg wrote:

Sure, but blanket-banning any 16-bit apps is a really bad idea. It
results in lost biz opportunity for an OS maker because people will be
leery of upgrades.

Who bans 16 bit apps? I did a search and every windows through 8 runs
16 bit apps.

New 64-bit Windows OS'es do.

When I searched I found specific info on running 16 bit apps under Win
8-64 bits. It's not a problem.

Really? You should tell that to MickeySoft then. Here is their own MSKB
entry on the topic - apart from a couple of well-known commercial
installers, 16-bit code is forbidden from running on x64 machines.

http://support.microsoft.com/kb/282423

They tried breaking InstallShield at first and that went down *really*
well! ISTR Adobe Photoshop would not install on x64 boxes at one point.

I don't doubt that hacking around in the registry with a flint axe will
make it possible to run some 16 bit code.

But really... I expect the number of customers that MS loses from 20
year old programs that won't run is in the single digits.

It's huge. MS is painfully aware that the industry is extremely sluggish
in upgrading, and it doesn't take much to figure out why: production
lines are full of stations that run 20+ year old software. Do you
honestly think a company will throw a well-working half-million-dollar
active laser trim system into the dumpster just because some OS
"requires" it?

You always bring up the tiny corner cases.

They are not corner cases. I know a major multinational corporation that
was stuck for ages using IE6 & XP because something in their huge MS
intranet implementation would not work at all with anything later. IE11
was out by the time they were able to get the system upgraded.

How many "half-million
Dollar" systems rely on 20 year old software. Anyone managing such a
machine would have replaced the piece long ago. It is no different than
any other part of the machine which wears out. I replace wiper blades
on my car every 6 months or year, I get new tires every three or four
years. I'm not going to expect to repair 20 year old computer hardware
so I will plan to replace it with new stuff and that includes the
software if necessary. But fortunately it will still run under Windows
8. :)

People with such kit tend to buy in an entire spare PC or two to keep
the thing running indefinitely. If it ain't broke don't fix it!

They don't lose customers, they lose business volume. So do their OEM
partners.

That is absurd. Like I said, I expect they have lost single digit
customers due to problems running DOS apps.

They have lost huge numbers due to launching products that were
defective like Vista and now Win8 (which isn't actually defective in a
technical sense so much as difficult to use as a desktop machine).
That's all ok as long as the OS does not blanket-ban the old stuff.

Are you going to explain what you are talking about?


This:

http://answers.microsoft.com/en-us/windows/forum/windows_8-winapps/how-do-i-run-a-16-bit-application-with-a-64-bit/ce0b3186-c39d-4027-88ef-f802a3f74f8e


Do you realize this info is not from MS, but from someone posting in a
forum? In other words, you are planning your business around hearsay
you found on a web forum. Since you like social media...

https://www.youtube.com/watch?v=pCFkxzVs5cc

It doesn't alter the fact that the "official" MS position is that it
doesn't work.

http://support.microsoft.com/kb/282423

Some older CAD doesn't and there's the problem. There is a lot of
custom software that is de facto irreplaceable.

I have a hard time believing that. I think this is a Joerg's world
issue. I guess no one does the sort of work you do because they can't
find the software.


No, the reason I am a rare species is probably that most folks in my
field of work are retired or no longer on earth.

If you've never dealt with custom beam field simulators or similar
specialty software you can't really understand this. They don't come out
with a new release every year or so. They come out with one, and
that's it.

Well, I guess the industry will have to shut down then. Maybe there
will be another one out in the next 20 years. In the mean time, try
running it under Windows 8, it should work.

It might be easier to create a VM and run a legacy OS like XP or DOS
6.22 in that, so that the awkward software is placed in a virtual
environment that it can recognise. I have known a few things that can't
cope with being run under post-XP Windows. One of my own little
utilities compiled with an ancient, once-brilliant DOS compiler fails on
Win7 - easy enough for me to recompile with a more modern compiler, but
if I didn't have the source code it would be a different matter
altogether.

I didn't need to bother until Win7 since the same antique DOS .exe has
been perfectly serviceable since the late 1980's when it was written.

I had to get used to some of the visual differences in Win 8, but that
is not a big deal. Every new version updates the look, same as a car.

Car makers have usually been smart enough not to put the windscreen in
the floor, swap round the brake and accelerator and hide the parking
brake. Not so MS: Win8 is an alien environment built on the same
"productivity" idea that made Office 2007 hilarious when it was rolled
out. The Office 2007 software was horribly broken out of the box, with
race conditions in Excel VBA handling of graphs and defaults that looked
like they had been drawn by a four-year-old child with crummy wax
crayons.

It was fun to watch people struggle with the "helpful" ribbon!
Jokes about the paperclip aside, at least he *was* trying to be helpful.


--
Regards,
Martin Brown
 
On 04/11/2014 19:19, rickman wrote:
On 11/4/2014 3:54 AM, Martin Brown wrote:
On 04/11/2014 07:20, upsidedown@downunder.com wrote:
On Mon, 03 Nov 2014 19:31:10 -0500, rickman <gnuarm@gmail.com> wrote:

Video on the motherboard is usually integrated. If you get a graphics
card it will be separate. A very few motherboards have a separate video
controller on board with separate video memory.

1920x1080x60x24bits is just 120 Mpixels/s or 360 MB/s.
DDR3 memories have peak transfer rates over 10 GB/s, so the video
refresh is less than 3 % of the memory bandwidth.
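
(A quick back-of-envelope check of those numbers in Python; the channel
rate below assumes standard PC3-12800, i.e. 12.8 GB/s per channel:)

    # Video refresh bandwidth for a 1920x1080, 60 Hz, 24-bit frame buffer.
    pixels_per_s = 1920 * 1080 * 60            # ~124 Mpixel/s
    bytes_per_s = pixels_per_s * 3             # 24 bits = 3 bytes/pixel
    print(bytes_per_s / 1e6)                   # ~373 MB/s

    # Share of DDR3 bandwidth, single and dual channel.
    single_channel = 12.8e9
    print(100 * bytes_per_s / single_channel)        # ~2.9 %
    print(100 * bytes_per_s / (2 * single_channel))  # ~1.5 %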

And provided that the high performance code that you are running is
sensible and cache aware the hit from the video refresh overhead is
barely detectable. The box runs a *lot* cooler without a 3D GPU in.

That is a highly inaccurate assumption. Multicore CPUs are very much
memory bandwidth limited. Read about the memory wall. Once you reach 3

I know perfectly well about the memory contention issues of multicore
machines. Hyperthreading is what gets in the way sometimes leaving the
data caches in a state where pipeline stalls necessarily occur.

or 4, adding CPUs gives diminishing returns for performance. Boost the
memory speed and performance picks up again. Take away memory bandwidth
and the CPU speed falls off as well. The point is why pay hundreds of

Each (virtual) CPU uses about 20% of memory bandwidth flat out on
PC3-12800 (800MHz) memory, so the 3-5% for the video makes no difference
to when the turnover in performance occurs. And any decently written fast
code will be properly cache aware so that the external memory bandwidth
limitations are almost irrelevant since everything is already in cache
apart from the very first access to a new cache line at each level.

It is quite unusual to be hitting things that hard for an extended
period. Vector dot products and large FFTs will, but most ordinary
computing tasks leave plenty of recovery gaps.

dollars for extra CPU speed only to piss it away with an on chip
graphics controller sharing the memory bus?

Because it doesn't have the performance hit that you imagine.

My i7-3770K at stock speed with the above memory at 9-9-9-24 has a
Geekbench score of 12275 @ 3.5GHz; the best are about 13000, so the
performance hit for using the internal video is at most -5%.
(broadly in agreement with the video bandwidth requirements)

In practice most applications do not hammer memory to quite the same
extent as benchmarks or optimised dot product numeric code.

There is no point installing a fancy power hungry 3D graphics card
unless you are actually going to use it via CUDA or similar.

http://www.nvidia.co.uk/object/cuda-parallel-computing-uk.html

The benchmarking game has been played to perfection in the i7.

--
Regards,
Martin Brown
 
On Tue, 04 Nov 2014 22:37:43 -0800, miso <miso@sushi.com> wrote:

Have I used Spice to analyze a resistor divider? Actually yes, but in
finite element analysis to simulate a laser trim procedure. The basic
networks are designed by hand.

John Larkin wrote:

Do that if you enjoy it. I

You have no fucking clue what I am talking about. I might as well be talking
to the wall. Have you ever designed a chip where you laser trim thin film
resistors?

OK, you have joined the swearing, insulting, content-free faction.




--

John Larkin Highland Technology, Inc

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
miso wrote:
Joerg wrote:


Normally on the HD. But not in the DOS days, there I used (part of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk


I can't speak for stuff they sell at box stores, but all my drives have
cache. [64Mbytes in my case, newer drives use 128Mbytes.] There is zero
reason to do a RAM disk. I'm running a software RAID, so besides the cache,
much of the data is in RAM anyway prior to being written to disk. [The
software RAID is one reason to use error detecting RAM.]

My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.


The advantage to building it yourself is you know the capabilities of each
component. The disadvantage is it tends to cost more.

That's ok. I just don't want this to turn into a time-consuming science
project.

--
Regards, Joerg

http://www.analogconsultants.com/
 
On 05/11/2014 19:50, Joerg wrote:
miso wrote:
Joerg wrote:

Normally on the HD. But not in the DOS days, there I used (part of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk

It is only worth doing that if you have vast amounts of memory unused.

You should also consider having an Intel SSD 32GB caching frequently
used files. I have one and it works fairly well when compiling stuff.
The files in use end up in the SSD after the first pass.

They call it Intel Rapid Storage Technology. It sits between your real
hard disk and the application, caching most recently used files.

I can't speak for stuff they sell at box stores, but all my drives have
cache. [64Mbytes in my case, newer drives use 128Mbytes.] There is zero
reason to do a RAM disk. I'm running a software RAID, so besides the cache,
much of the data is in RAM anyway prior to being written to disk. [The
software RAID is one reason to use error detecting RAM.]

My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.

The advantage to building it yourself is you know the capabilities of each
component. The disadvantage is it tends to cost more.

That's ok. I just don't want this to turn into a time-consuming science
project.

It is much easier to buy something off the shelf unless you have time to
burn. Although it does make sense to buy the SSDs separately. Just be
sure to fit them on a SATA3 6Gb/s interface. Way faster than magnetic.

If you want the fastest possible scratch disk a RAID0 array of the
Samsung 840 256GB or 512GB is about as fast as you can get. Others may
benchmark faster by using compression but for incompressible data the
Samsungs or Crucial are still about as good as it gets.

And they don't whine horribly or run hot like the old 7200rpm SCSIs did.

--
Regards,
Martin Brown
 
Martin Brown wrote:
On 05/11/2014 19:50, Joerg wrote:
miso wrote:
Joerg wrote:

Normally on the HD. But not in the DOS days, there I used (part of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk


It is only worth doing that if you have vast amounts of memory unused.

With 32GB of RAM I'd have gobs of unused memory. Normally it would be
very expensive to have that much memory, one reason why that Costco deal
is enticing. AFAICT it only costs $20 in extra software to turn a large
portion of RAM into a RAM disk.


You should also consider having an Intel SSD 32GB caching frequently
used files. I have one and it works fairly well when compiling stuff.
The files in use end up in the SSD after the first pass.

They call it Intel Rapid Storage Technology. It sits between your real
hard disk and the application, caching most recently used files.

Good idea, have to check that out. Never heard of it so far. Do you know
an example, like a link to a product? When I search it all points to
Intel but not computer stores.


I can't speak for stuff they sell at box stores, but all my drives have
cache. [64Mbytes in my case, newer drives use 128Mbytes.] There is zero
reason to do a RAM disk. I'm running a software RAID, so besides the
cache,
much of the data is in RAM anyway prior to being written to disk. [The
software RAID is one reason to use error detecting RAM.]

My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.

The advantage to building it yourself is you know the capabilities of
each component. The disadvantage is it tends to cost more.

That's ok. I just don't want this to turn into a time-consuming science
project.

It is much easier to buy something off the shelf unless you have time to
burn. ...

Absolutamente.


... Although it does make sense to buy the SSDs separately. Just be
sure to fit them on a SATA3 6Gb/s interface. Way faster than magnetic.

Or RAM disk. Just have to make sure the power doesn't go out before
shoveling a successful sim run to HD. Although mostly that isn't needed,
I never store RAW files in LTSpice because they'd clog up a backup
system in no time.
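
(For what it's worth, a minimal sketch of that "shoveling" step in
Python; the R: drive letter and destination folder are made-up examples,
not anything LTspice itself provides:)

    import shutil
    from pathlib import Path

    ramdisk = Path(r"R:\ltspice")         # hypothetical RAM-disk folder
    archive = Path(r"D:\sims\keepers")    # hypothetical folder on the HD

    archive.mkdir(parents=True, exist_ok=True)
    for raw in ramdisk.glob("*.raw"):
        shutil.copy2(raw, archive / raw.name)   # copies data + timestamps
        print("saved", raw.name)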


If you want the fastest possible scratch disk a RAID0 array of the
Samsung 840 256GB or 512GB is about as fast as you can get. Others may
benchmark faster by using compression but for incompressible data the
Samsungs or Crucial are still about as good as it gets.

And they don't whine horribly or run hot like the old 7200rpm SCSIs did.

I've never liked SCSI.

--
Regards, Joerg

http://www.analogconsultants.com/
 
On 06/11/2014 04:23, Joerg wrote:
Martin Brown wrote:
On 05/11/2014 19:50, Joerg wrote:
miso wrote:
Joerg wrote:

Normally on the HD. But not in the DOS days, there I used (part of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk


It is only worth doing that if you have vast amounts of memory unused.


With 32GB of RAM I'd have gobs of unused memory. Normally it would be
very expensive to have that much memory, one reason why that Costco deal
is enticing. AFAICT it only costs $20 in extra software to turn a large
portion of RAM into a RAM disk.

Just be careful to make sure that you don't end up thrashing VM.
Not sure the extra RAM is that expensive; the going rate is ~$100/8GB
here and I would expect it to be down to about $60 over the pond.

You should also consider having an Intel SSD 32GB caching frequently
used files. I have one and it works fairly well when compiling stuff.
The files in use end up in the SSD after the first pass.

They call it Intel Rapid Storage Technology. It sits between your real
hard disk and the application, caching most recently used files.


Good idea, have to check that out. Never heard of it so far. Do you know
an example, like a link to a product? When I search it all points to
Intel but not computer stores.

Some motherboards have it built in. ASUS gaming ones for instance.

http://www.asus.com/uk/Motherboards/Z87IPRO/specifications/

Be hard pressed to tell you which other ones without searching.
It is handy if you regularly run the same code and data again and again.

I can't speak for stuff they sell at box stores, but all my drives have
cache. [64Mbytes in my case, newer drives use 128Mbytes.] There is zero
reason to do a RAM disk. I'm running a software RAID, so besides the
cache,
much of the data is in RAM anyway prior to being written to disk. [The
software RAID is one reason to use error detecting RAM.]

My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.

The advantage to building it yourself is you know the capabilities of
each
component. The disadvantage is is tends to cost more.

That's ok. I just don't want this to turn into a time-consuming science
project.

It is much easier to buy something off the shelf unless you have time to
burn. ...


Absolutamente.


... Although it does make sense to buy the SSDs separately. Just be
sure to fit them on a SATA3 6Gb/s interface. Way faster than magnetic.


Or RAM disk. Just have to make sure the power doesn't go out before
shoveling a successful sim run to HD. Although mostly that isn't needed,
I never store RAW files in LTSpice because they'd clog up a backup
system in no time.


If you want the fastest possible scratch disk a RAID0 array of the
Samsung 840 256GB or 512GB is about as fast as you can get. Others may
benchmark faster by using compression but for incompressible data the
Samsungs or Crucial are still about as good as it gets.

And they don't whine horribly or run hot like the old 7200rpm SCSIs did.


I've never liked SCSI.

It had its uses. I still have one PC that has classic SCSI on the back
to use with my now ancient once state-of-the-art Nikon slide scanner.

--
Regards,
Martin Brown
 
Martin Brown wrote:
On 06/11/2014 04:23, Joerg wrote:
Martin Brown wrote:
On 05/11/2014 19:50, Joerg wrote:
miso wrote:
Joerg wrote:

Normally on the HD. But not in the DOS days, there I used (part
of) an
extra 4MB that I installed for this. RAM-disk should also be possible
under Windows. Like here:

http://blog.laptopmag.com/faster-than-an-ssd-how-to-turn-extra-memory-into-a-ram-disk



It is only worth doing that if you have vast amounts of memory unused.


With 32GB of RAM I'd have gobs of unused memory. Normally it would be
very expensive to have that much memory, one reason why that Costco deal
is enticing. AFAICT it only costs $20 in extra software to turn a large
portion of RAM into a RAM disk.

Just be careful to make sure that you don't end up thrashing VM.
Not sure the extra RAM is that expensive; the going rate is ~$100/8GB
here and I would expect it to be down to about $60 over the pond.

That is surprisingly cheap. Not sure if it was Phil's dealer or another
one, but when going into the configuration menu to upgrade to 32GB they
showed many hundreds of dollars extra.


You should also consider having an Intel SSD 32GB caching frequently
used files. I have one and it works fairly well when compiling stuff.
The files in use end up in the SSD after the first pass.

They call it Intel Rapid Storage Technology. It sits between your real
hard disk and the application, caching most recently used files.


Good idea, have to check that out. Never heard of it so far. Do you know
an example, like a link to a product? When I search it all points to
Intel but not computer stores.

Some motherboards have it built in. ASUS gaming ones for instance.

http://www.asus.com/uk/Motherboards/Z87IPRO/specifications/

Be hard pressed to tell you which other ones without searching.
It is handy if you regularly run the same code and data again and again.

Ah, now I understand. It's not a module to plug in but I'd need a whole
different mobo then.

What I am doing is just what you described, running SPICE sims over and
over again with minor circuit changes each time.

[...]

--
Regards, Joerg

http://www.analogconsultants.com/
 
John Larkin wrote:

On Tue, 04 Nov 2014 22:37:43 -0800, miso <miso@sushi.com> wrote:

Have I used Spice to analyze a resistor divider? Actually yes, but in
finite element analysis to simulate a laser trim procedure. The basic
networks are designed by hand.

John Larkin wrote:

Do that if you enjoy it. I

You have no fucking clue what I am talking about. I might as well be
talking to the wall. Have you ever designed a chip where you laser trim
thin film resistors?


OK, you have joined the swearing, insulting, content-free faction.

If you had something relevant to contribute, I would comment. However, your
comments were 100% non sequitur.

The laser trimming situation is a case where spice is actually useful for a
voltage divider design. You comment out elements to simulate the laser
taking a bite out of thin film. It isn't the kind of situation that could be
solved with simple algebra. The sensitivity of the bite is hard to compute.
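
(A toy illustration of the "comment out elements" idea in Python: the
top leg of a divider is modelled as series trim segments, and each
"bite" opens one shorting link. The values are invented; the real work,
per the above, is the finite element modelling of the bite geometry:)

    r_bottom = 10e3                  # fixed bottom leg
    r_top_base = 10e3                # untrimmed top leg
    segments = [200.0] * 20          # 20 x 200 ohm trim segments
    links = [True] * 20              # True = link intact (segment shorted)

    def ratio():
        trimmed = sum(r for r, ok in zip(segments, links) if not ok)
        r_top = r_top_base + trimmed
        return r_bottom / (r_top + r_bottom)

    print("untrimmed ratio:", ratio())
    for bite in range(5):            # open links one at a time
        links[bite] = False
        print("after bite", bite + 1, "ratio:", ratio())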

With any tool, you need the wisdom to know when to use it.
 
upsidedown@downunder.com wrote:

When using a lot of caching with disk writes, make sure that you have
some kind of UPS, so that operations are actually written to the
physical disk, before the UPS power is lost. Some file systems seem to
corrupt the file system data, if the power is lost at a bad time.

Very true. I have a Tripp Lite true-sine double-conversion UPS running.
 
Joerg wrote:

miso wrote:
Joerg wrote:


My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.


The advantage to building it yourself is you know the capabilities of
each component. The disadvantage is it tends to cost more.


That's ok. I just don't want this to turn into a time-consuming science
project.

The ram in a disk drive is used as a FIFO of sorts. [It is way more
complicated than a FIFO in reality.] You are making the assumption that no
writing on the drive is taking place, and that it is all going to the
buffer. That is not the case. Thus the size of the cache on the drive
doesn't need to equate to the size of the output file.

You're getting about 200MBytes/sec on a modern drive. That is 5 seconds a
Gbyte. This shouldn't be an issue.

> http://www.storagereview.com/hgst_4tb_deskstar_nas_hdd_review
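
(The same figure turned into write times, as a quick Python check; the
file sizes are arbitrary examples:)

    rate = 200e6                       # bytes/s, sustained sequential HD
    for gbytes in (1, 4, 16):          # hypothetical .raw file sizes
        print(gbytes, "GB ->", gbytes * 1e9 / rate, "s")
    # 1 GB -> 5 s, 4 GB -> 20 s, 16 GB -> 80 s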
 
On a sunny day (Fri, 07 Nov 2014 01:53:24 -0800) it happened miso
<miso@sushi.com> wrote in <m3i4qi$t1o$2@speranza.aioe.org>:

upsidedown@downunder.com wrote:

When using a lot of caching with disk writes, make sure that you have
some kind of UPS, so that operations are actually written to the
physical disk, before the UPS power is lost. Some file systems seem to
corrupt the file system data, if the power is lost at a bad time.

Very true. I have a Tripp Lite true-sine double-conversion UPS running.

I have a laptop that will continue for several hours if power fails...
:)
 
On Fri, 07 Nov 2014 02:16:36 -0800, miso <miso@sushi.com> wrote:

Joerg wrote:

miso wrote:
Joerg wrote:


My LTSpice RAW files are substantially larger than that. HD read-write
times are a major slowdown.


The advantage to building it yourself is you know the capabilities of
each component. The disadvantage is it tends to cost more.


That's ok. I just don't want this to turn into a time-consuming science
project.


The ram in a disk drive is used as a FIFO of sorts. [It is way more
complicated than a FIFO in reality.]

More likely, the buffer capacity is used to reorganize write requests
so that close by sectors and tracks are written first, i.e. minimize
R/W head movement and do the writing during a single disk rotation.

Originally, this optimization was done by the OS, but the OS needed to
know the physical structure of the disk. When the disk is accessed by
logical block numbers (LBN) and when the disk drive itself performs
e.g. bad block replacement, the OS doesn't know the physical structure
of the disk and hence the disk itself has to perform access
optimization.
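
(A rough Python sketch of that reordering idea: pending writes,
identified by logical block number, get serviced in an elevator-style
sweep from the current head position instead of arrival order. The LBNs
here are invented; real drive firmware of course knows the actual
geometry:)

    pending = [8200, 15, 8190, 5120, 30, 5125]   # arrival order
    head = 100                                    # current head position

    ahead = sorted(lbn for lbn in pending if lbn >= head)
    behind = sorted((lbn for lbn in pending if lbn < head), reverse=True)
    service_order = ahead + behind                # sweep up, then back down

    print(service_order)   # [5120, 5125, 8190, 8200, 30, 15]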

Unfortunately, if the power is suddenly lost, some disk structures are
out of date, and hopefully the next startup may be able to fix the
disk.

You are making the assumption that no
writing on the drive is taking place, and that it is all going to the
buffer. That is not the case. Thus the size of the cache on the drive
doesn't need to equate to the size of the output file.

Disk caches help with multiple files on a badly fragmented disk. I do
not understand how it would help with a single file access on a well
defragmented disk.
 
On Saturday, 8 November 2014 08:54:52 UTC+11, John Larkin wrote:
On Fri, 07 Nov 2014 01:51:04 -0800, miso <miso@sushi.com> wrote:
John Larkin wrote:
On Tue, 04 Nov 2014 22:37:43 -0800, miso <miso@sushi.com> wrote:
John Larkin wrote:

Do that if you enjoy it. I

You have no fucking clue what I am talking about. I might as well be
talking to the wall. Have you ever designed a chip where you laser trim
thin film resistors?

OK, you have joined the swearing, insulting, content-free faction.

If you had something relevant to contribute, I would comment. However, your
comments were 100% non sequitur.

The laser trimming situation is a case where spice is actually useful for a
voltage divider design. You comment out elements to simulate the laser
taking a bite out of thin film. It isn't the kind of situation that could be
solved with simple algebra. The sensitivity of the bite is hard to compute.

So, Spice *is* a design tool.

It can be, in the hands of people who do design, as opposed to persistent trial and error.

--
Bill Sloman, Sydney
 
On Fri, 07 Nov 2014 01:51:04 -0800, miso <miso@sushi.com> wrote:

John Larkin wrote:

On Tue, 04 Nov 2014 22:37:43 -0800, miso <miso@sushi.com> wrote:

Have I used Spice to analyze a resistor divider? Actually yes, but in
finite element analysis to simulate a laser trim procedure. The basic
networks are designed by hand.

John Larkin wrote:

Do that if you enjoy it. I

You have no fucking clue what I am talking about. I might as well be
talking to the wall. Have you ever designed a chip where you laser trim
thin film resistors?


OK, you have joined the swearing, insulting, content-free faction.




If you had something relevant to contribute, I would comment. However, your
comments were 100% non sequitur.

The laser trimming situation is a case where spice is actually useful for a
voltage divider design. You comment out elements to simulate the laser
taking a bite out of thin film. It isn't the kind of situation that could be
solved with simple algebra. The sensitivity of the bite is hard to compute.

So, Spice *is* a design tool.


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 07 Nov 2014 22:49:10 -0800, miso <miso@sushi.com> Gave us:

upsidedown@downunder.com wrote:

The ram in a disk drive is used as a FIFO of sorts. [It is way more
complicated than a FIFO in reality.]

More likely, the buffer capacity is used to reorganize write requests
so that close by sectors and tracks are written first, i.e. minimize
R/W head movement and do the writing during a single disk rotation.

Originally, this optimization was done by the OS, but the OS needed to
know the physical structure of the disk. When the disk is accessed by
logical block numbers (LBN) and when the disk drive itself performs
e.g. bad block replacement, the OS doesn't know the physical structure
of the disk and hence the disk itself has to perform access
optimization.


That is why I said it isn't exactly a FIFO. ;-) But the effect is similar
in that data is stored in the RAM in the drive. You can see benchmarks where
if the file being written is very large and comes from a simple copy
operation, the drive becomes an issue. But in the case of spice, the data is
being computed rather than copied, so if the circuit is anything but
trivial, the write demands shouldn't be the limiting factor.

BTW, as a linux user, I don't defrag. Using ext4, your files begin life
being fragmented.

Hard drive caches are huge and grab nearly entire cylinders of data.

Fragmentation WAS a BIG problem when drives were physically bigger,
the heads moved slower, and the data trailed out onto longer linear
strings. Seek times and head transitions all had a huge cost on the
speed of the data retrieval as well as the longevity of a given drive.

This is no longer the case. The heads only transit less than an inch
across their entire gamut of travel. That happens a lot faster these
days.

That and the caching make it an issue of the past. I see hard drives
getting hit every second of every day, and they last for years.

I seriously doubt they exhibit any serious access time hindrances due
to 'fragmentation', regardless of the OS.
 
Benchmarking has shown quad channel isn't useful at the moment.

Xeon is not expensive if you get the E3 1200 series.
> http://ark.intel.com/products/series/53495/Intel-Xeon-Processor-E3-1200-Product-Family#@All
Use a Supermicro mobo and use the error-detecting/correcting RAM. I can
swing a dead cat and it would land on a local system builder who could put
that together if you can't do it yourself.
> http://www.supermicro.com/products/motherboard/Xeon/C220/X10SAE.cfm

The E3 1200 chips are Haswell chips that have Xeon tweaks. You get all the
virtualization. They handle error-correcting memory of the UNBUFFERED
variety. That confuses a lot of people. Because it uses unbuffered RAM, it
is limited to 32Gbytes. Xeons that use registered memory can use more RAM,
but that RAM is slower.

Dell has the E3 1200 V3 in some of their PowerEdge products. I have no idea
if they are good PCs since I build my own.

> http://www.tomshardware.com/reviews/xeon-e3-1275-v3-haswell-cpu,3590.html

The Xeon product line is all about stability. No overclocking. They use ECC,
which some say is slower. [I don't know.] If you are seriously going to do a
ram disk (dumb idea), you would want the ECC. For software RAID, you should
have ECC. I give Dell credit for at least using a Supermicro mobo, since
some of the Asus mobos don't use ECC correctly.

The bad news is RAM prices are up for some reason.
 
upsidedown@downunder.com wrote:

The ram in a disk drive is used as a FIFO of sorts. [It is way more
complicated than a FIFO in reality.]

More likely, the buffer capacity is used to reorganize write requests
so that close by sectors and tracks are written first, i.e. minimize
R/W head movement and do the writing during a single disk rotation.

Originally, this optimization was done by the OS, but the OS needed to
know the physical structure of the disk. When the disk is accessed by
logical block numbers (LBN) and when the disk drive itself performs
e.g. bad block replacement, the OS doesn't know the physical structure
of the disk and hence the disk itself has to perform access
optimization.

That is why I said it isn't exactly a FIFO. ;-) But the effect is similar
in that data is stored in the RAM in the drive. You can see benchmarks where
if the file being written is very large and comes from a simple copy
operation, the drive becomes an issue. But in the case of spice, the data is
being computed rather than copied, so if the circuit is anything but
trivial, the write demands shouldn't be the limiting factor.

BTW, as a linux user, I don't defrag. Using ext4, your files begin life
being fragmented.
 
On 11/5/2014 1:54 AM, miso wrote:
Phil Hobbs wrote:


Which processors are in there?


It has a pair of AMD Opteron 6128s. I haven't been keeping up, but 3
years ago the Magny Cours Opterons ran rings around the Intel offerings
for floating point.

Cheers

Phil Hobbs


I was a big AMD fan, but Intel has trumped them. It isn't even a contest
today.

I will say that the AMD CPUs have better memory management, so they do
multitask a little better, but that can't save them on today's market.

I delayed building this Xeon PC hoping AMD would get their act together, but
I gave up.

I heard a salesman saying the AMDs are faster on floating point. Anyone
heard that from a reliable source, like benchmarks maybe?

--

Rick
 
