Jihad needs scientists

In article <erpmth$c02$1@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpaqn$8ss_004@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <ermuv0$rph$4@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <ermmos$8qk_002@s774.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
[....]
I use electronic banking. I go to the bank's web site and do it. It is
just another "surfing the web" case. I don't have any special software
to do it. I am far from the normal user, but even I didn't add anything
beyond the web browser to do my banking.

Since you have already converted to on-line banking, why are
you disputing my statements about it?

I am disputing your incorrect statements.

You cannot know what is incorrect because you've already been
herded into doing online banking.

You are completely off your nut on this.

Not at all. You are already herded into the corral. You will
never experience the pressure that will push the rest into
that pen.

It isn't a corral. A corral implies a loss of freedom. I can still write
a check or see a teller if I want.
For now.

I can pay a bill while I'm at work or on vacation. I have lost nothing.
You have lost the physical paper trail. Doesn't it bother you
that electronic checks can be applied against your account without
any physical permission written by you?


/BAH
 
In article <erppm2$c02$5@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpb68$8qk_001@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
snip

Computer usage
by the general population requires more than this. You keep
looking at the load balance from a naive user POV.

No, you are just making stuff up because you've been shown to be wrong
about the real world of computers today.

Keep thinking like that and you'll never learn something.

From you! You have been shown to be wrong on this subject.
You think I'm wrong only because you don't have my knowledge.
Do you also think that people, who are experts in fields not
your own, are also wrong?
<snip>

/BAH
 
In article <4f62c$45e0c567$cdd08488$22846@DIALUPUSA.NET>,
"nonsense@unsettled.com" <nonsense@unsettled.com> wrote:
Ken Smith wrote:
In article <erpb68$8qk_001@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:

In article <ermu1l$rph$2@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:

In article <ermmhd$8qk_001@s774.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:

In article <erhn0i$em5$4@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:

[.....]

Sure you can. If the computer is running a printer server, you can
predict the right order for the files to be read by the server. If
there is a task constantly running to take sliding billion-point FFTs,
you know what is best for the FFT part. Just because the human may
change something, it doesn't mean they change everything.

All of this is single-task, single user thinking.

No, it isn't. It is taking a practical example of a way that real
multiuser systems actually run.

I know of plenty of OSes and how they actually ran. We even made
them go.


Then you should know that I am correct in what I am saying about the real
usage.


It is very common for a small fraction of
the tasks to need the large fraction of the memory. This is just the way
it is in the real world.

That all depends on the computer site and who the users are.


Everything "depends", but 99.9999% of the cases are like that. There are
very few where the jobs are evenly spread.



The computer that is doing the work of posting this is a multiuser
machine. It has me on it using only several tens of kilobytes.

Gag. That's too much.


That's what I am using. "pico" is smallish and there is a little overhead
from "bash".


Computer usage
by the general population requires more than this. You keep
looking at the load balance from a naive user POV.

No, you are just making stuff up because you've been shown to be wrong
about the real world of computers today.

Keep thinking like that and you'll never learn something.


From you! You have been shown to be wrong on this subject.




I'm using a company that sells computer time like a timesharing company.
They also sell modem access, and data storage. This is the modern
business model for this sort of company.

And that is one business.


There are a great many like it now. There are also a lot of internet ISP
companies. They have the same sort of usage profile.


[....]

You only think that because you haven't stopped to think about what I
wrote. We were discussing the case where swapping had to happen. There
is no point in asking at this point if it needs to happen because the
argument has already shown that it must. There is more data to be
maintained than will fit into the RAM of the machine. There is no
question about swapping being needed. The discussion is about the
advantages of having the code specific to the large memory usage make
informed choices about swapping.

You are not talking about swapping; you are talking about the
working set of pages. You do NOT have to swap code if the
storage disk is as fast as the swapping disk.


What the devil are you talking about? You were sort of making sense until
you got to this. The "swapping" under discussion is between the swap
volume and the physical RAM. The swap volume can never be anything like
as fast as the RAM. A VM system makes it appear that there is more RAM
than is physically there by using the swap volume.

Do you think that computers still use drum storage or mumble tanks for the
memory?

It could just be her shorthand
You are correct. In the biz, we use the word "core" the way
the word "Kleenex" is used. It is meant to distinguish
between "fast memory" and the less fast memory. The way
Ken has been using the word RAM is to reference the "fast
memory".

but she still talks about
"core" which I remember well, and differing speeds of
hard drives, diskpacks, and so on. I wonder if she is still
using an 80ms full sized hard drive on her home system.
I don't know the specs of my disk.
That being said, a great deal of what she has been writing
attaches to really elementary computer and OS design which,
yes.

offhand, reading both of you going at it, she seems to
understand better. It seems to me you're a level or few away
from the sorts of internals she worked with during her career.
Yup. I haven't been able to communicate this to Ken. I've
given up on that MP guy.


Most of those essentials haven't changed all that much.
They haven't. The computer biz is simply reinventing what
we implemented 25 and 35 years ago. It's not only annoying
but boring; the biz should have been further ahead.

AFAIK
the linux systems we're running continue to organize the hard
drives much as early Unix organized tape magnetic storage.
Certainly that was true as recently as 5-7 years ago, but I
haven't messed with Linux on those levels in some time so
that *might* have changed though I see no reason why it
should have. (That and a buck will get you a cup of coffee.)
It's still similar. I never liked the holey file implementation,
but that is my taste in methods of nailing bits to the iron.

/BAH
 
In article <erpov3$c02$3@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpfvv$8qk_001@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <ermvfo$rph$5@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <ermm1f$8qk_001@s774.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <era3ti$tvp$6@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
[.....]
The problem is that the software side
of the biz is dragging its heels and the hardware types are
getting away with not including assists to the OS guys.

Most hardware guys have to design hardware to run with Windows.

Sigh! Windows' OS has a grandfather that required those hardware
assists. Your point is making an excuse for lousy design
requirements.

No, I am pointing out what has really happened. Windows was written to
run on the usual DOS hardware. Gradually, features got added until it
needed more stuff than DOS. If I designed a 4096 by 4096 display, I
wouldn't sell very many unless I could make a Windows driver for it.


[.....]
Even just suggesting that there be true backups of people's machines
throws them into a panic.

Good. That's why you should just say the words. That will have
a ripple effect throughout the industry.

No, I need my computer to work and be backed up. I don't give the
hindmost 3/4 of a rat what happens to the average Windows user's data.

I know that you don't care. I do care. That is why you don't
understand about all kinds of computing usage and I do.

You are assuming that I don't know about things I don't care about; this is
a serious error on your part. I know that there are many people out there
who have not yet seen the light and still run Windows. I know that these
people are doomed to lose valuable data at some time in the future. I
know that fixing this will require some software that gets around things
Windows does. I don't run Windows. I run Linux. As a result, I want to
back up my data on a Linux box. I also want to protect myself from the
bad effects of Windows losing data on someone else's machine. This is why
I raise the issue.
And you keep assuming, erroneously, that this type of usage is the
majority of computing in the world. It is not. I am trying to
talk about the day when everybody has to have a computer to do any
financial transactions.

"Imagine an evil person gets to the PC, deletes all
the files of that user and reformats the harddisk on the machine. How
long would it take to put it all back as a working system?" has been the
question I have asked.

Instead of saying evil person, just say lightning strike or power
surge or blizzard/hurricane when everything shuts down for 2 weeks.

That is a lot less damage than an evil person can cause. Backing up by
storing on two computers will serve to protect against lightning.

No, it won't. There are billions of dollars spent on trying to
make one set of computing services non-local.

Either you just lack imagination about what an evil person can do, or you
overestimate the problem caused by something like a lightning strike. An
evil person can destroy any copy on any machine he has the ability to
write to. This means that he can delete all the data on the remote
machines too. This is why you need a write only memory in the system.
This subject is too complex to discuss without some basic computing
knowledge. You don't seem to have that specialized knowledge. I've
spent man-years on these kinds of problems.

[.....]
On just a single PC it is quite easy.

No, it is not. The way files, directories, structures and MOST
importantly, data has been organized makes it almost impossible
to manage files on a PC.

We are talking about a backup. You can just copy the entire hard disk
when it is a single PC.

That is not a backup of the files.

You seem to be talking about a bit-to-bit copy. That will also
copy errors which don't exist on the output device.

I am talking of a complete and total and correct image of the drive.
I know you are. A complete and total and correct image of the
drive will also include its bad spots. It is possible (and
likely) that the reason you are rebuilding your system is because
a bad spot happened on a crucial point of the file system. The
copy you are describing will simply restore the problem that wiped
out your disk.


It is a bit-by-bit copy. Usually it is stored onto a larger drive without
compression. If something goes bad, you can "loop back and mount" the
image. This gives you a read-only exact copy of the file system as it
was. You then can simply fix the damaged file system.
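
To make the "loop back and mount" step concrete, here is a minimal
sketch in Python of the whole cycle on a Linux box. The device and the
paths are invented, a partition image is assumed so that a plain loop
mount works, and it would have to run as root:

    # Image a partition bit for bit, then loop-mount the image read-only
    # so the damaged file system can be inspected without altering it.
    import subprocess

    subprocess.run(["dd", "if=/dev/sda1", "of=/backup/sda1.img", "bs=1M"],
                   check=True)
    subprocess.run(["mkdir", "-p", "/mnt/old-disk"], check=True)
    # "loop back and mount": the image now behaves like a read-only disk
    subprocess.run(["mount", "-o", "loop,ro", "/backup/sda1.img",
                    "/mnt/old-disk"], check=True)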
Now go back to my reply ^up there^. You have a flaw in your
backup strategy.
[....]
That's called an incremental backup. Great care needs to occur
to combine full and incremental backups.

No great amount of care is needed. I've done that sort of restore a few
times with no great trouble. Since files are stored with the modification
date, a copy command that checks dates does the hard part.
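
A date-checking copy of that sort is only a few lines. Here is a
minimal sketch in Python, with invented paths; the rule "keep whichever
copy is newer" is what lets the full backup and the incrementals be
applied in any order:

    # Copy a backup tree over a restore tree, keeping the newer file.
    import os, shutil

    def apply_backup(src, dst):
        for root, dirs, files in os.walk(src):
            out = os.path.join(dst, os.path.relpath(root, src))
            os.makedirs(out, exist_ok=True)
            for name in files:
                s, d = os.path.join(root, name), os.path.join(out, name)
                # copy only if missing, or older, at the destination
                if not os.path.exists(d) or \
                        os.path.getmtime(d) < os.path.getmtime(s):
                    shutil.copy2(s, d)  # copy2 keeps the modification date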

You are very inexperienced w.r.t. this computing task.

You seem to be claiming knowledge you don't have.
I am not claiming; it is a fact that I have the knowledge, and
extensive work experience.
It is not
as easy as you make it out to be.

It in fact can be easier. I knew someone who wrote a lot of the software
used by banks and insurance companies. They stored the data transaction
by transaction, with daily incrementals, monthly near-full backups and
yearly total backups. The system for recovery was very well tested and
automated. After every software change, they had to requalify the code.
This meant restoring an old backup and making a new one and restoring
that. I assume that software like that is still the common practice.
It's even more complicated. I yak daily with a guy who does this work.
Now think about that fact
and all the people who are going to be doing all banking online.

It doesn't matter if you bank on line or in person. If your bank's
computers fail, you can't do a transaction. If they lose all their
computer data, you will have a devil of a time getting at your money.
This is why I always try to keep more than one bank, a couple of credit
cards and some cash. I know that there is some risk that a bank may have
a Windows machine connected to the important information.
Your backup strategy for this type of computing is multiple copies.
Most people don't have enough money to maintain multiple accounts.
Most people don't check their single account's activity; having
many accounts will not solve this problem but multiply instances
of it. To use your strategy, you have to keep up with your backup
maintenance for many accounts rather than one. Every bank's timing
is different. This is not a solution.

/BAH
 
In article <9at0u29qgk8bq246lpi6ahft8o68qna00v@4ax.com>,
MassiveProng <MassiveProng@thebarattheendoftheuniverse.org> wrote:
On Sat, 24 Feb 07 13:53:03 GMT, jmfbahciv@aol.com Gave us:

We are talking about a backup. You can just copy the entire hard disk
when it is a single PC.

That is not a backup of the files.

You're an idiot! A mirror of your drive volume whether kept on site
or off, IS a backup, you total twit!
A mirror, as you are using the term, is a second copy of
the disk, not the files.
You seem to be talking about a bit-to-bit copy.

You are stupid. The mode does NOT matter.
It matters greatly. There are pitfalls in each kind of strategy.

The finished copy is ALL that matters.
No. If you have copied the hiccup that caused the disk wipeout,
the finished copy also contains the problem.

That will also
copy errors which don't exist on the output device.

You are too thick.

Do you even know what an incremental backup is?
Yes. I maintained and developed the code we supplied on
our OS distribution tapes.


It starts by making a FULL backup. From that point on, each
additional backup done to the volume only adds those files that have
changed. The volume written to follows all the standard FILE SYSTEM
rules for one file being copied over another of the same name.
Oh, good grief. You have a serious design flaw here if you are
overwriting your backup copies.

Again, you need to BONE UP.
I am boning up. I'm learning how pitiful the computing biz
is out there in the Real World. It's got serious problems.

/BAH
 
In article <ers25a$8qk_002@s1016.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com says...
In article <erpp2s$c02$4@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpam2$8ss_002@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <ermtbj$rph$1@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <ermofh$8qk_003@s774.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <er4i05$1ln$7@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <er47qv$8qk_001@s897.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
[.....]
NT was written in the first place for a processor that didn't do
interrupts well.

Nuts. If the hardware doesn't do it, then you can make the software
do it. As TW used to say, "A small matter of programming".

On the N10 there was no way to code around it. The hardware was designed
so that it had to walk to the breakroom and back before it picked up the
phone. Nothing you could say over the phone would help.



The N10, AKA the 860 processor, had to spill its entire
pipeline when interrupted. This slowed things down a lot when the code
involved interrupts. When the project was moved back to the x86 world,
it was marketed as secure ... well, sort of ... well, kind of ... it's
better than 98. I don't think a lot of time was spent on improving the
interrupt performance.

You are confusing delivery of computing services by software with
delivery of computing services of hardware.

No, hardware sets the upper limit on what software can do.

That all depends on who is doing the coding.

If a CPU chip needs 1 hour to do an ADD instruction, you can't make it
go faster by anything you code. Like I said, it sets the upper limit on
the performance.

Sigh! If an ADD takes an hour and the computation has to be done
in less time, then you don't use the ADD instruction. You do
the addition by hand.

In other words: You need another CPU to do the operation.

Not at all. You can do arithmetic by hand.

No amount of
fancy code on a machine that takes an hour per instruction will fix it.

This is what I have been trying to explain to you about the hardware
setting the upper limit on performance.

Sigh! The IBM 1620 had no arithmetic instructions. Arithmetic
was done "by hand" by looking up table entries.
Yep! The 1620 was known as the CADET (Can't Add, Didn't Even Try).
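
For anyone who hasn't met the CADET: it really did add by looking up
tables held in memory. A rough sketch of the idea in Python (not the
1620's actual table layout):

    # Decimal addition done purely by table lookup, digit at a time.
    ADD = [[(a + b) % 10 for b in range(10)] for a in range(10)]
    CARRY = [[(a + b) // 10 for b in range(10)] for a in range(10)]

    def add_decimal(x, y):
        x, y = x.zfill(len(y)), y.zfill(len(x))
        out, carry = [], 0
        for a, b in zip(reversed(x), reversed(y)):
            d = ADD[int(a)][int(b)]            # digit sum from the table
            out.append(str(ADD[d][carry]))     # fold in the carry
            carry = CARRY[int(a)][int(b)] or CARRY[d][carry]
        if carry:
            out.append("1")
        return "".join(reversed(out))

    print(add_decimal("999", "1"))   # prints 1000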

--
Keith
 
jmfbahciv@aol.com wrote:

Doesn't it bother you that electronic checks can be applied against your
account without any physical permission written by you?
You mean a debit ?

I could hardly live without it.

Graham
 
In article <ers25a$8qk_002@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erpp2s$c02$4@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
[......]
If a CPU chip needs 1 hour to do an ADD instruction, you can't make it
go faster by anything you code. Like I said, it sets the upper limit on
the performance.

Sigh! If an ADD takes an hour and the computation has to be done
in less time, then you don't use the ADD instruction. You do
the addition by hand.

In other words: You need another CPU to do the operation.

Not at all. You can do arithmetic by hand.
You will say anything to avoid admitting that you missed the point when I
said that the hardware is what sets the upper limit on the performance.

No amount of
fancy code on a machine that takes an hour per instruction will fix it.

This is what I have been trying to explain to you about the hardware
setting the upper limit on performance.

Sigh! The IBM 1620 had no arithmetic instructions. Arithmetic
was done "by hand" by looking up table entries.
... and that set a limit on the performance, didn't it?
--
--
kensmith@rahul.net forging knowledge
 
In article <4f62c$45e0c567$cdd08488$22846@DIALUPUSA.NET>,
nonsense@unsettled.com <nonsense@unsettled.com> wrote:
Ken Smith wrote:
[...]
You are not talking about swapping; you are talking about the
working set of pages. You do NOT have to swap code if the
storage disk is as fast as the swapping disk.


What the devil are you talking about? You were sort of making sense until
you got to this. The "swapping" under discussion is between the swap
volume and the physical RAM. The swap volume can never be anything like
as fast as the RAM. A VM system makes it appear that there is more RAM
than is physically there by using the swap volume.

Do you think that computers still use drum storage or mumble tanks for the
memory?

It could just be her shorthand but she still talks about
"core" which I remember well, and differing speeds of
hard drives, diskpacks, and so on. I wonder if she is still
using an 80ms full sized hard drive on her home system.
It was "high speed" drum drives that were used for swap space in the
distant past. They were much faster than the disk drives of the era.


That being said, a great deal of what she has been writing
attaches to really elementary computer and OS design which,
offhand, reading both of you going at it, she seems to
understand better. It seems to me you're a level or few away
from the sorts of internals she worked with during her career.
She doesn't have the grasp of hardware and when she tries to get into that
area, she doesn't realize that she is outside her area of knowledge.
Remember that most of this has been about device drivers and VM
implementations etc. In these areas you have a large intersection between
the hardware and software.

[....]
Most of those essentials haven't changed all that much. AFAIK
the linux systems we're running continue to organize the hard
drives much as early Unix organized tape magnetic storage.
Do you mean the hardware or the logical content? In either case you are
wrong about how things are done on most Linux boxes today. The Reiser
file system is what is used for the logical contents. The hardware is
typically SATA.

The partitioning is still as it was in DOS days partly because the Linux
folks want to be able to work with DOS/Windows drives.


--
--
kensmith@rahul.net forging knowledge
 
In article <ers2eb$8qk_004@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erppm2$c02$5@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpb68$8qk_001@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
snip

Computer usage
by the general population requires more than this. You keep
looking at the load balance from a naive user POV.

No, you are just making stuff up because you've been shown to be wrong
about the real world of computers today.

Keep thinking like that and you'll never learn something.

From you! You have been shown to be wrong on this subject.

You think I'm wrong only because you don't have my knowledge.
Do you also think that people, who are experts in fields not
your own, are also wrong?
It appears that what you call knowledge is in fact a bunch of
misconceptions about hardware issues. You have made several statements
that are simply false on the subject. You may know things about the
coding of some old OSes but you are out of your depth when you try to talk
about hardware.

--
--
kensmith@rahul.net forging knowledge
 
In article <errvc2$8ss_003@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erpqdl$c02$6@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erpeao$8qk_001@s934.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <ermuqc$rph$3@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <ermlh8$8ss_012@s774.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <erhnfn$em5$5@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
In article <erhd4t$8qk_001@s916.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv@aol.com> wrote:
In article <i70nt25k4ubuvllr029cun9ebu1e1bng0a@4ax.com>,
MassiveProng <MassiveProng@thebarattheendoftheuniverse.org> wrote:
On Mon, 19 Feb 07 13:29:06 GMT, jmfbahciv@aol.com Gave us:

Not at all. OSes were handling the above problems in the 60s.
The reason virtual memory was invented was to solve the above
problem.

The swapping, in this case, CAUSES the interference.

That has to do with severe memory management problems.

No, it is just part of the overhead of doing VM.

Nope. VM doesn't have to swap. Swapping is done to make
room so a memory request by another task can be serviced.

You said "nope" and then confirmed that my statement was correct.
Swapping needs to happen if you need more virtual RAM than there is real
RAM. To be able to swap, the code for doing the swapping must always be
in the real RAM. As a result, there is code overhead in having a VM
system. There is also a speed overhead when swapping happens. The OS
uses some amount of CPU time on the task switching needed to make VM do
the swap.

VM isn't swapping. VM allows the OS to manage smaller chunks
of memory rather than segments.

That is completely and totally wrong. "Virtual memory" means quite
literally "memory that is not real".

No. It is memory whose addressing is larger than available physical
memory.
No, it is not only the addressing that appears larger. The total memory
appears to be more. Allowing an address space that is larger is merely
address translation. You only get into virtual memory when it appears
to the programs as though the machine has more memory than there is
physical RAM. This is exactly what I was telling you when I directed
you to how the word "virtual" is defined.


MVT360 managed memory on an (IIRC) 4K
page basis. This certainly did not qualify as a virtual memory system.


I don't know why but people often confuse virtual memory
addressing with swapping. The two are separate.

No, you are confused on the issue of needing to swap.

No, I'm not. You are.

It seems that your confusion runs much deeper than that. You seem not to
understand what "virtual" means.

I know how OSes consider it. I was there when it was first implemented
on our architectures. JMF, my other half, did the work.
You have confused address translation with virtual memory. You have been
trying to argue hardware issues you don't understand. Just because you
were standing in the same room and saw something happen doesn't mean you
understand it.

--
--
kensmith@rahul.net forging knowledge
 
In article <ers29b$8qk_003@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erpmth$c02$1@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
[.....]
It isn't a corral. A corral implies a loss of freedom. I can still write
a check or see a teller if I want.

For now.
.... and thus it will remain.

I can pay a bill while I'm at work or on vacation. I have lost nothing.

You have lost the physical paper trail. Doesn't it bother you
that electronic checks can be applied against your account without
any physical permission written by you?
The physical permission can be forged more easily than the electronic one.
When it gets to the bank, they do all the work electronically. As a
result, whether I do on line banking or not, the actual work is done
electronically. If the security in the bank is broken, not using on-line
banking will not protect me.


--
--
kensmith@rahul.net forging knowledge
 
In article <45E1B4F4.1B7C7FA2@hotmail.com>,
Eeyore <rabbitsfriendsandrelations@hotmail.com> wrote:
jmfbahciv@aol.com wrote:

Doesn't it bother you that electronic checks can be applied against
your account
without
any physical permission written by you?

You mean a debit ?

I could hardly live without it.
Even if you did try, at the bank the check causes an electronic transfer
of money. These days, the checks don't travel. It has been a long time
since physical money went from bank to bank in reaction to a check.

--
--
kensmith@rahul.net forging knowledge
 
In article <errvlm$8ss_004@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erpnd5$c02$2@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
[....]
No, if the copy checks the dates, you can load the backups in any order.
What you do in practice is mount the complete backup and then the newest
incremental. You then mount the previous incremental and then the one
before that.

This is one way to do a full restore. Note that it may also restore
the cause of the problem.
It only restores things to as they were. It doesn't fix any buggy code in
the process. This is as much as you can ask of a restore. Repair
software is another issue.


If your software is any good, it will let you know when you can stop. All
that's needed is to record the dates on all of the files. A fairly simple
script can tell you if you have more to do.

Software cannot tell you if the file you want is no longer on
storage.
What the heck do you mean by that? Obviously software can tell you if a
file exists or not. All it needs is a list of all the files that do
exist.
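
A sketch in Python of such a script: record a manifest of names and
modification dates when the backups are made, then report anything
missing or stale after a restore. All names are invented:

    import os

    def manifest(tree):
        m = {}
        for root, dirs, files in os.walk(tree):
            for name in files:
                p = os.path.join(root, name)
                m[os.path.relpath(p, tree)] = os.path.getmtime(p)
        return m

    def unfinished(saved, restored_tree):
        # files still missing, or older than the recorded date
        have = manifest(restored_tree)
        return [p for p, t in saved.items()
                if p not in have or have[p] < t]

When unfinished() comes back empty, there are no more incrementals to
mount.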


Another method was to do a full backup save each day. This will
work until you find that you lost a source file sometime in the
last 12 years. Now how do you find the last save of that file?

This is not a problem in practice if the copy is smart about dates.

AFAIK, only our system had enough dates stored in each file's
RIB (retrieval information block) that could do this.
On a Linux machine, there is enough information to do it.


The usual practice is to do a full backup every so often and incremental
ones in between.

Yes but only for static storage. This will not cover data
that is transaction-based.
Yes, it does cover transaction-based data. Take the example of banking
information. The account balances as of, let's say, midnight are stored.
From that point forwards, you have the transaction records. The
transaction records for a given account contain not just the movement of
the money but other information such as the new total. In this case one
needs only look back in time for each account to the last time there was a
break in the transactions. In a real-time system, when you are doing
rapid transactions, the totals are always out of date. The first
transaction after a break has a correct total.
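
The recovery rule can be written down in a few lines. A sketch in
Python; the record fields are invented, but the point is the stored
running total, which lets you detect a break and trust the first record
after it:

    def recover_balance(snapshot, transactions):
        balance = snapshot                  # balance as of, say, midnight
        for t in transactions:              # records in time order
            balance += t["amount"]
            if balance != t["new_total"]:   # a break in the transactions
                balance = t["new_total"]    # first record after the break
        return balance                      # carries the correct total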



--
--
kensmith@rahul.net forging knowledge
 
Ken Smith wrote:

jmfbahciv@aol.com> wrote:
kensmith@green.rahul.net (Ken Smith) wrote:
jmfbahciv@aol.com> wrote:

VM isn't swapping. VM allows the OS to manage smaller chunks
of memory rather than segments.

That is completely and totally wrong. "Virtual memory" means quite
literally "memory that is not real".

No. It is memory whose addressing is larger than available physical
memory.

No, it is not only the addressing that appears larger. The total memory
appears to be more. Allowing an address space that is larger is merely
address translation. You only get into virtual memory when it appears
to the programs as though the machine has more memory than there is
physical RAM. This is exactly what I was telling you when I directed
you to how the word "virtual" is defined.
To the processor itself the VM should be transparent. It should 'look' and
behave like acres of RAM. A good example of where such a task should be
offloaded from the CPU itself.

Graham
 
Ken Smith wrote:

Eeyore <rabbitsfriendsandrelations@hotmail.com> wrote:
jmfbahciv@aol.com wrote:

Doesn't it bother you that electronic checks can be applied against
your account without any physical permission written by you?

You mean a debit ?

I could hardly live without it.

Even if you did try, at the bank the check causes an electronic transfer
of money. These days, the checks don't travel. It has been a long time
since physical money went from bank to bank in reaction to a check.
Back in 1971 when I opened my first bank account, they still posted you the paid
cheques. That soon disappeared.

Electronic debits are invaluable. I just signed up to a telecoms provider whose
call charges are insanely cheap. They won't accept cheques and stuff. It all has
to be done electronically.

http://www.call1899.co.uk/index2.php#

I'm still having some trouble believing this. Landline calls inside the UK are 4
pence regardless of duration ! Calling the USA / Canada / France / Germany /
Singapore even ! costs 1p per minute plus 4p connection charge.

Graham
 
Eeyore wrote:
Ken Smith wrote:

Eeyore <rabbitsfriendsandrelations@hotmail.com> wrote:
jmfbahciv@aol.com wrote:

Doesn't it bother you that electronic checks can be applied against
your account without any physical permission written by you?
You mean a debit ?

I could hardly live without it.
Even if you did try, at the bank the check causes an electronic transfer
of money. These days, the checks don't travel. It has been a long time
since physical money went from bank to bank in reaction to a check.

Back in 1971 when I opened my first bank account, they still posted you the paid
cheques. That soon disappeared.

Electronic debits are invaluable. I just signed up to a telecoms provider whose
call charges are insanely cheap. They won't accept cheques and stuff. It all has
to be done electronically.

http://www.call1899.co.uk/index2.php#

I'm still having some trouble believing this. Landline calls inside the UK are 4
pence regardless of duration ! Calling the USA / Canada / France / Germany /
Singapore even ! costs 1p per minute plus 4p connection charge.

Graham

I use Skype for all international and local calling these days.
1.2p anywhere.
It's esp valuable for business calling those 800 numbers in the US -
they're free.

--
Dirk

http://www.onetribe.me.uk - The UK's only occult talk show
Presented by Dirk Bruere and Marc Power on ResonanceFM 104.4
http://www.resonancefm.com
 
In article <ers3rf$8qk_001@s1016.apx1.sbo.ma.dialup.rcn.com>,
<jmfbahciv@aol.com> wrote:
In article <erpov3$c02$3@blue.rahul.net>,
kensmith@green.rahul.net (Ken Smith) wrote:
[.....]
You are assuming that I don't know about things I don't care about; this is
a serious error on your part. I know that there are many people out there
who have not yet seen the light and still run Windows. I know that these
people are doomed to lose valuable data at some time in the future. I
know that fixing this will require some software that gets around things
Windows does. I don't run Windows. I run Linux. As a result, I want to
back up my data on a Linux box. I also want to protect myself from the
bad effects of Windows losing data on someone else's machine. This is why
I raise the issue.

And you keep assuming, erroneously, that this type of usage is the
majority of computing in the world. It is not.
Yes, it is. Look at how many homes have PCs in them today. This is the
big market for computers today. It massively outweighs the business
usage.


I am trying to
talk about the day when everybody has to have a computer to do any
financial transactions.
You are changing the subject to the future. In fact your transactions do
require a computer. It is the one at the bank and not yours however.


Either you just lack imagination about what an evil person can do, or you
overestimate the problem caused by something like a lightning strike. An
evil person can destroy any copy on any machine he has the ability to
write to. This means that he can delete all the data on the remote
machines too. This is why you need a write only memory in the system.

This subject is too complex to discuss without some basic computing
knowledge. You don't seem to have that specialized knowledge. I've
spent man-years on these kinds of problems.
You are attempting to get out of discussing an issue because you know that
you have already made enough errors in the area to discredit everything
you say. You claim a lot of knowledge. Your knowledge is from a very
narrow base. You also claim to have spent "man-years"; this doesn't mean
you got it right or even that you know anything. It just means you spent
a lot of time.


[....]
You seem to be talking about a bit-to-bit copy. That will also
copy errors which don't exist on the output device.

I am talking of a complete and total and correct image of the drive.

I know you are. A complete and total and correct image of the
drive will also include its bad spots. It is possible (and
likely) that the reason you are rebuilding your system is because
a bad spot happened on a crucial point of the file system. The
copy you are describing will simply restore the problem that wiped
out your disk.
It does the restore. The repair is another issue. Putting the system
back as it was is the first step.


It is a bit-by-bit copy. Usually it is stored onto a larger drive without
compression. If something goes bad, you can "loop back and mount" the
image. This gives you a read-only exact copy of the file system as it
was. You then can simply fix the damaged file system.

Now go back to my reply ^up there^. You have a flaw in your
backup strategy.
No, I don't. You have confused doing a repair with doing a restore. The
restore method I suggested is correct. If you now want to discuss the new
topic of repair, then we can begin that topic.


No great amount of care is needed. I've done that sort of restore a few
times with no great trouble. Since files are stored with the modification
date, a copy command that checks dates does the hard part.

You are very inexperienced w.r.t. this computing task.

You seem to be claiming knowledge you don't have.

I am not claiming; it is a fact that I have the knowledge, and
extensive work experience.
You have also made claims about hardware issues, that are easy to prove to
be false.

[....]
It in fact can be easier. I knew someone who wrote a lot of the software
used by banks and insurance companies. They stored the data transaction
by transaction, with daily incrementals, monthly near-full backups and
yearly total backups. The system for recovery was very well tested and
automated. After every software change, they had to requalify the code.
This meant restoring an old backup and making a new one and restoring
that. I assume that software like that is still the common practice.

It's even more complicated. I yak daily with a guy who does this work.

I doubt that it has become seriously more complex. The issues all existed
at that time. The amount of data is all that has increased not the
complexity of the question.

[....]
It doesn't matter if you bank on line or in person. If your bank's
computers fail, you can't do a transaction. If they lose all their
computer data, you will have a devil of a time getting at your money.
This is why I always try to keep more than one bank, a couple of credit
cards and some cash. I know that there is some risk that a bank may have
a Windows machine connected to the important information.

Your backup strategy for this type of computing is multiple copies.
Yes, multiple copies of the data in one form or another is what you need.
The information must be stored more than once if you expect to be able to
put back the data that has been lost. There is no way around this.
Error-correcting codes are just ways of storing the information more than
once, so even the storage systems and modern RAM chips do this.
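
The point about error-correcting codes is easy to illustrate with the
crudest one there is: store each bit three times and take a majority
vote on readback. Real ECC is far more economical, but the principle
(redundant storage) is the same. A sketch in Python:

    def encode(bits):
        return [b for b in bits for _ in range(3)]   # each bit thrice

    def decode(coded):
        out = []
        for i in range(0, len(coded), 3):
            trio = coded[i:i + 3]
            out.append(1 if sum(trio) >= 2 else 0)   # vote fixes one flip
        return out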


Most people don't have enough money to maintain multiple accounts.
Most people can do it. You don't need to put a lot of money into a bank
to have an account there. With most banks, just having had an account for
a while will get you some form of loan on just your say so. Overdraft
protection is the common loan situation.


Most people don't check their single account activity; having
many accounts will not solve this problem but multiply instances
of it.
It protects against the mere failure of the bank's computer. This can
strand you.

To use your strategy, you have to keep up with your backup
maintenance for many accounts rather than one. Every bank's timing
is different. This is not a solution.
It solves the problem of failure. Evil activity is solved by checking the
balances etc. There are two problems that must be covered. You ignore
one, and you don't assume I've already thought of how to solve the other.

--
--
kensmith@rahul.net forging knowledge
 
In article <45E1CD23.26249F55@hotmail.com>,
Eeyore <rabbitsfriendsandrelations@hotmail.com> wrote:
[....]
No, it is not only the addressing that appears larger. The total memory
appears to be more. Allowing an address space that is larger is merely
address translation. You only get into virtual memory when it appears
to the programs as though the machine has more memory than there is
physical RAM. This is exactly what I was telling you when I directed
you to how the word "virtual" is defined.

To the processor itself the VM should be transparent. It should 'look' and
behave like acres of RAM. A good example of where such a task should be
offloaded from the CPU itself.
No, that isn't done. VM systems are also usually multitaskers. You could
create one that isn't, but the rule is that they are. Here's how the
operation breaks down in a multitask environment.

- Running Task A
- Task A does a page fault on the real memory
- OS gets an interrupt
- Perhaps some checking is done here
- OS looks for the page to swap out
- Complex issue of priority on swapping skipped here.
- OS marks the outgoing page to be not usable
- OS starts swap actions going
- OS looks for a task that can run now
- OS remembers some stuff about task priorities
- OS switches to new context
- Task B runs
- Swap action completes
- OS gets interrupt
- OS marks the new page as ready to go
- OS checks the task priority information
- OS maybe switches tasks
- Task A or B runs depending on what OS decided.


This way, a lower priority task can do useful stuff while we wait for the
pages to swap.
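
That outline boils down to a toy, runnable sketch in Python. Everything
here is invented and vastly simplified; it shows only the hand-off that
lets a lower-priority task run while the swap is in flight:

    import collections

    ready = collections.deque()        # runnable tasks, in order
    waiting = []                       # tasks asleep on a swap

    def page_fault(task):
        print(task, "page fault; swap started")
        waiting.append(task)           # task sleeps until the page is in
        if ready:
            print("running", ready.popleft(), "while the swap is in flight")

    def swap_complete():
        task = waiting.pop()           # interrupt: the page is ready
        ready.append(task)             # task is runnable again
        print(task, "runnable; the scheduler may switch back")

    ready.append("task B")
    page_fault("task A")               # A faults; B gets the CPU
    swap_complete()                    # A becomes runnable again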




--
--
kensmith@rahul.net forging knowledge
 
Ken Smith wrote:

In article <4f62c$45e0c567$cdd08488$22846@DIALUPUSA.NET>,
nonsense@unsettled.com <nonsense@unsettled.com> wrote:

Ken Smith wrote:

[...]

You are not talking about swapping; you are talking about the
working set of pages. You do NOT have to swap code if the
storage disk is as fast as the swapping disk.


What the devil are you talking about? You were sort of making sense until
you got to this. The "swapping" under discussion is between the swap
volume and the physical RAM. The swap volume can never be anything like
as fast as the RAM. A VM system makes it appear that there is more RAM
than is physically there by using the swap volume.

Do you think that computers still use drum storage or mumble tanks for the
memory?

It could just be her shorthand but she still talks about
"core" which I remember well, and differing speeds of
hard drives, diskpacks, and so on. I wonder if she is still
using an 80ms full sized hard drive on her home system.


It was "high speed" drum drives that were used for swap space in the
distant past. They were much faster than the disk drives of the era.



That being said, a great deal of what she has been writing
attaches to really elementary computer and OS design which,
offhand, reading both of you going at it, she seems to
understand better. It seems to me you're a level or few away

from the sorts of internals she worked with during her career.

She doesn't have the grasp of hardware and when she tries to get into that
area, she doesn't realize that she is outside her area of knowledge.
Remember that most of this has been about device drivers and VM
implementations etc. In these areas you have a large intersection between
the hardware and software.

[....]

Most of those essentials haven't changed all that much. AFAIK
the linux systems we're running continue to organize the hard
drives much as early Unix organized tape magnetic storage.


Do you mean the hardware or the logical content? In either case you are
wrong about how things are done on most Linux boxes today. The Reiser
file system is what is used for the logical contents. The hardware is
typically SATA.

The partitioning is still as it was in DOS days partly because the Linux
folks want to be able to work with DOS/Windows drives.
Looks like Dennis Ritchie doesn't remember.

http://en.wikipedia.org/wiki/Inode

ReiserFS isn't universal among Unix/Linux systems.

Reiser has been arrested for murdering his wife.
http://www.ninareiser.com/

The FS may be at its end.

See also http://www.ontrack.com/special/200501-LinuxReiserFS.aspx
 
