OT: science, technology, engineering, mathematics and medicine



Guest

Thu Feb 07, 2019 11:45 am   



On Thursday, 7 February 2019 03:28:19 UTC, bill....@ieee.org wrote:
Quote:
On Thursday, February 7, 2019 at 3:21:33 AM UTC+11, tabby wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is, not to put too fine a point on it, wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.


fact-free rubbish snipped

Quote:
The whole purpose of the scientific publication process is that once
something is published other researchers can repeat the same experiment
and either confirm or refute the claims made by the first group.

I assumed we all knew what peer review is. It's a nice idea but there are some issues with it in practice:
1. Research is routinely done for profit, and sponsoring companies inevitably pay researchers that give them the best results. It takes no genius to work out how that goes.

Industrial research is routinely done for profit (or as precaution against future loss). Academic research is largely motivated by a desire to get publications in high prestige journals, and citations for the stuff that gets published.


both of which lead to similar pressures

> Pharmacy companies don't normally publish negative results, but that's the only obvious distortion in the process. Academics also find it hard to publish negative results.

heh. There speaks the clueless.

Quote:
2. Others can redo the experiment but seldom do unless paid to, which in most cases they aren't. When they are paid to they're under the profit motive, which encourages an awful lot of overlooking & more.

Only some of them are influenced by the profit motive. A large chunk of the motivation for publication is to get noticed - even in profit-driven industry.


both of which...

Quote:
Lying to get noticed does happen

https://en.wikipedia.org/wiki/Diederik_Stapel

but getting found out has catastrophic consequences.


sometimes. Unfortunately nowadays research is routinely accepted from people that have been found fiddling things beforehand - and even research where fiddling has been found within it. This is one of the failings of NICE.


Quote:
3. IRL when people spot problems, the normal response is not to publish a criticism. This occurs for a few reasons, including
a) I have plenty other things to do

True.

b) Criticising others is likely to get what I publish criticised

False. An irritated author may react with a counter-blast, but that's another citation. "There is no such thing as bad publicity." P.T. Barnum.


It's one of the concerns that stops people. Whether it's a correct concern makes no difference in practice.

Quote:
Sloman A. W. “Comment on ‘A versatile thermoelectric temperature controller with 10 mK reproducibility and 100 mK absolute accuracy’ [Rev. Sci. Instrum. 80, 126107 (2009)]”, Review of Scientific Instruments 82, 027101-1 - 027101-2 (2011).

c) people working in the field but not having Ph.D. qualifications usually think their voice won't be heard.

A Ph.D. is a remarkably narrow qualification. Mine is in Physical Chemistry, but my publications are entirely within the instrumentation literature. Editors couldn't care less whether you have a Ph.D. and everybody (you excepted) seems to know that.


the point is that there are many who work in these fields, e.g. nurses, who see a reality entirely at odds with the research day in, day out. Their findings don't get published.

Quote:
Great idea, but it doesn't work as well as one would hope.

It's better than any other idea that anybody has come up with.


I see you're unable to read a response before replying as well as short on clues about research.

Quote:
What works best? Studies of very large numbers of people over many years where the author has no connection with their treatment and is not sponsored by interested parties. You've got much higher sample numbers, much longer study lengths & as much as practical of the money motive is removed. Imho such data gathering should be automatic across the board for any developed nation's health service. It doesn't solve all the problems but it's a lot better.

Cochrane collaboration.

https://en.wikipedia.org/wiki/Cochrane_(organisation)

It dates back to 1993, and a lot of what it does are meta-analyses of lots of data collected by people with an economic interest in knowing what happens to patients.

NT is absolutely right - for once - in saying that such data-gathering should be built into any developed nation's health service, but it has only recently become a practical option, and privacy issues do complicate the process.


The need to ask patients for permission does not stop the process at all.


Quote:
https://www.myhealthrecord.gov.au/

Ultimately one needs to be realistic about medical research. It's an inherently, shall we say, messy field, and believing what one is told is generally naive.


more sillyboll snipped


Just a few to start with:

Why Most Published Research Findings Are False
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

A slew of problems:
http://fixingpsychology.blogspot.com/2013/01/holiday-special-year-of-scandals-2012.html

http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844

a team led by University of Virginia’s Brian Nosek repeated 100 psychological experiments and found that only 36% of originally “significant” (in the statistical sense) results were replicated.
https://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/

etc etc etc etc etc etc etc etc.
Those links only look at some aspects of the problem. There are more major problems elsewhere in the process.
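
A rough way to see where claims like the Ioannidis one come from is to work through the false-positive arithmetic. The sketch below (plain Python, with illustrative numbers assumed rather than taken from any of the papers above) shows how a low prior probability of a hypothesis being true, modest statistical power and a p < 0.05 threshold combine so that a sizeable share of "significant" findings are wrong even before any sponsor bias or selective reporting is added:

    # Back-of-the-envelope sketch of the Ioannidis-style argument.
    # All three numbers below are illustrative assumptions, not data from the paper.
    true_fraction = 0.10   # suppose 1 in 10 hypotheses tested is actually true
    power         = 0.50   # suppose modest power to detect a real effect
    alpha         = 0.05   # conventional p < 0.05 significance threshold

    true_hits  = true_fraction * power          # real effects that come out "significant"
    false_hits = (1 - true_fraction) * alpha    # null effects that come out "significant"
    ppv = true_hits / (true_hits + false_hits)  # chance a "significant" finding is real

    print(f"Share of 'significant' findings that are real: {ppv:.0%}")  # about 53% here

Push the assumed power or prior down a little, or add selective reporting on top, and that share drops quickly, which is broadly what the Nosek replication figure suggests happens in practice.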


NT


Guest

Thu Feb 07, 2019 11:45 am   



On Thursday, 7 February 2019 03:29:06 UTC, Clifford Heath wrote:
Quote:
On 7/2/19 12:56 pm, tabbypurr wrote:
On Thursday, 7 February 2019 00:54:20 UTC, Clifford Heath wrote:
On 7/2/19 3:21 am, tabbypurr wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

I just wish it were effective in practice. The world would be a better place.

I'm glad I wasn't born a century ago.

I get to live twice as long, and ten times as well.

That seems pretty effective to me.

Clifford Heath.


It's an advance for sure, due to a mixture of things: medical research, financial development, the time to put various improvements in place, developments in car design, all sorts of things. Obviously medical research has brought positive results, but it's been a very miss & sometimes hit path. Now that we can do better, we need to.


NT

Clifford Heath
Guest

Thu Feb 07, 2019 11:45 am   



On 7/2/19 9:01 pm, tabbypurr_at_gmail.com wrote:
Quote:
On Thursday, 7 February 2019 03:29:06 UTC, Clifford Heath wrote:
On 7/2/19 12:56 pm, tabbypurr wrote:
On Thursday, 7 February 2019 00:54:20 UTC, Clifford Heath wrote:
On 7/2/19 3:21 am, tabbypurr wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

I just wish it were effective in practice. The world would be a better place.

I'm glad I wasn't born a century ago.

I get to live twice as long, and ten times as well.

That seems pretty effective to me.

Clifford Heath.

It's an advance for sure, due to a mixture of things: medical research, financial development, the time to put various improvements in place, developments in car design, all sorts of things. Obviously medical research has brought positive results, but it's been a very miss & sometimes hit path. Now that we can do better, we need to.


You misinterpret. All the other things became possible because *people
live longer* because medicine and basic hygiene stopped them dying young.

When everyone died at 50-60, we didn't take the time to even get
properly educated - not if we wanted to see our grand-children. So we
certainly couldn't do the other things too.

Clifford Heath.


Guest

Thu Feb 07, 2019 12:45 pm   



On Thursday, February 7, 2019 at 8:58:14 PM UTC+11, tabb...@gmail.com wrote:
Quote:
On Thursday, 7 February 2019 03:28:19 UTC, bill....@ieee.org wrote:
On Thursday, February 7, 2019 at 3:21:33 AM UTC+11, tabby wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

fact-free rubbish snipped


Any comments on NT's opinions are fact-free - as are the opinions themselves.

NT wants us to think that he has superior insights, but he's careful not to tell us where he thinks his insights are superior.

Quote:
The whole purpose of the scientific publication process is that once
something is published other researchers can repeat the same experiment
and either confirm or refute the claims made by the first group.

I assumed we all knew what peer review is. It's a nice idea but there are some issues with it in practice:
1. Research is routinely done for profit, and sponsoring companies inevitably pay researchers that give them the best results. It takes no genius to work out how that goes.

Industrial research is routinely done for profit (or as precaution against future loss). Academic research is largely motivated by a desire to get publications in high prestige journals, and citations for the stuff that gets published.

both of which lead to similar pressures


Cravings for money or prestige both lead to bad behaviour, but rather different sorts of bad behaviour.

Quote:
Pharmacy companies don't normally publish negative results, but that's the only obvious distortion in the process. Academics also find it hard to publish negative results.

heh. There speaks the clueless.


NT hasn't had to listen to academics complaining about not being able to publish negative results. He imagines himself to be clueful, but this does seem to be one more of his ego-boosting delusions.

Quote:
2. Others can redo the experiment but seldom do unless paid to, which in most cases they aren't. When they are paid to they're under the profit motive, which encourages an awful lot of overlooking & more.

Only some of them are influenced by the profit motive. A large chunk of the motivation for publication is to get noticed - even in profit-driven industry.

both of which...


Look the same to NT, who does have these ego-boosting delusions.

Quote:
Lying to get noticed does happen

https://en.wikipedia.org/wiki/Diederik_Stapel

but getting found out has catastrophic consequences.

sometimes. Unfortunately nowadays research is routinely accepted from people that have been found fiddling things beforehand - and even research where fiddling has been found within it. This is one of the failings of NICE.


Example?

Quote:
3. IRL when people spot problems, the normal response is not to publish a criticism. This occurs for a few reasons, including
a) I have plenty other things to do

True.

b) Criticising others is likely to get what I publish criticised

False. An irritated author may react with a counter-blast, but that's another citation. "There is no such thing as bad publicity".P.T. Barnum.

It's one of the concerns that stops people. Whether it's a correct concern makes no difference in practice.


All sorts of things are used as excuses for not doing something. Critical publications do happen anyway.

> > Sloman A. W. “Comment on ‘A versatile thermoelectric temperature controller with 10 mK reproducibility and 100 mK absolute accuracy’ [Rev. Sci. Instrum. 80, 126107 (2009)]”, Review of Scientific Instruments 82, 027101-1 - 027101-2 (2011).

This showed up here before it got into the Review of Scientific Instruments. I don't remember feeling a moment's hesitation about submitting the comment.

Quote:
c) people working in the field but not having phd qualifications usually think their voice won't be heard.

A Ph.D. is a remarkably narrow qualification. Mine is in Physical Chemistry, but my publications are entirely within the instrumentation literature. Editors couldn't care less whether you have a Ph.D. and everybody (you excepted) seems to know that.

the point is that there are many who work in fields eg nurses who see the reality entirely at odds with research day in day out. Their findings don't get published.


The reality they see may not be of the kind that is susceptible to publication, which does depend on reproducible effects.

Quote:
Great idea, but it doesn't work as well as one would hope.

If better than any other idea that anybody has come up with.

I see you're unable to read a response before replying as well as short on clues about research.


The Cochrane collaboration is just the peer-reviewed scientific method being forced on a medical establishment that much preferred a more feudal system in which senior people - with ideas that could be quite as silly as NT's - could write what they liked and get it published because they were senior figures.

Quote:
What works best? Studies of very large numbers of people over many years where the author has no connection with their treatment and is not sponsored by interested parties. You've got much higher sample numbers, much longer study lengths & as much as practical of the money motive is removed. Imho such data gathering should be automatic across the board for any developed nation's health service. It doesn't solve all the problems but it's a lot better.

Cochrane collaboration.

https://en.wikipedia.org/wiki/Cochrane_(organisation)

It dates back to 1993, and a lot of what it does are meta-analyses of lots of data collected by people with an economic interest in knowing what happens to patients.

NT is absolutely right - for once - in saying that such data-gathering should be built into any developed nation's health service, but it has only recently become a practical option, and privacy issues do complicate the process.

The need to ask patients for permission does not stop the process at all.


If patients can opt out at random, it shrinks the pool of subjects and could bias it in unpredictable ways.

> > https://www.myhealthrecord.gov.au/

Which does offer the opt-out option.

Quote:
Ultimately one needs to be realistic about medical research. It's an inherently shall we say messy field, and believing what one is told is generally naive.

more sillyboll snipped


Just a few to start with:

Why Most Published Research Findings Are False
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124


It's a rather extreme claim, based on a rather particular application of the word "false".

Quote:
A slew of problems:
http://fixingpsychology.blogspot.com/2013/01/holiday-special-year-of-scandals-2012.html


I've already cited Diederik Stapel. The US is bigger than the Netherlands and a few more rogues are to be expected. It isn't exactly proof that everything that is published is fraudulent or even "false".

> http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844

Peer review does look at timeliness as well as content. Resubmitting old articles probably isn't a good measure. The field does move on.

Quote:
a team led by University of Virginia’s Brian Nosek repeated 100 psychological experiments and found that only 36% of originally “significant” (in the statistical sense) results were replicated.
https://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/


They got to pick the 100 psychological experiments they were trying to replicate. Nobody would waste time replicating the McGurk effect

https://en.wikipedia.org/wiki/McGurk_effect

or anything else that is spectacularly reliable.

> etc etc etc etc etc etc etc etc.

As if NT wasn't already scraping the bottom of the barrel.

There is the point that psychology uses humans as experimental animals, and they aren't good test subjects. One of my friends swore off human subjects as soon as she could and moved over to bats, who were much more reproducible.

> Those links only look at some aspects of the problem. There are more major problems elsewhere in the process.

None of which NT will be able to identify.

--
Bill Sloman, Sydney


Guest

Thu Feb 07, 2019 1:45 pm   



On Thursday, February 7, 2019 at 9:01:17 PM UTC+11, tabb...@gmail.com wrote:
Quote:
On Thursday, 7 February 2019 03:29:06 UTC, Clifford Heath wrote:
On 7/2/19 12:56 pm, tabbypurr wrote:
On Thursday, 7 February 2019 00:54:20 UTC, Clifford Heath wrote:
On 7/2/19 3:21 am, tabbypurr wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

I just wish it were effective in practice. The world would be a better place.

I'm glad I wasn't born a century ago.

I get to live twice as long, and ten times as well.

That seems pretty effective to me.

Clifford Heath.

It's an advance for sure. Due to a mixture of things: medical research, financial development, the time to put various improvements in place, developments in car design, all sorts of things. Obviously medical research has brought positive results, but it's been a very miss & sometimes hit path. Now that we can do better, we need to.


And the reason we now do it better is that medical research has been - to some extent - taken out of the hands of doctors who are trained (for their own psychological health) to make up their minds quickly, and not ratiocinate about the process after the event.

The scientific method depends on the idea that you could be getting something wrong - and might just have killed a few patients in consequence - so the people who do that sort of thinking have to be at some remove from the sharp end.

Big studies, spread over lots of doctors and even more patients, help get that kind of depression-avoiding separation.

--
Bill Sloman, Sydney


Guest

Fri Feb 08, 2019 11:45 am   



On Thursday, 7 February 2019 10:33:01 UTC, Clifford Heath wrote:
Quote:
On 7/2/19 9:01 pm, tabbypurr wrote:
On Thursday, 7 February 2019 03:29:06 UTC, Clifford Heath wrote:
On 7/2/19 12:56 pm, tabbypurr wrote:
On Thursday, 7 February 2019 00:54:20 UTC, Clifford Heath wrote:
On 7/2/19 3:21 am, tabbypurr wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

I just wish it were effective in practice. The world would be a better place.

I'm glad I wasn't born a century ago.

I get to live twice as long, and ten times as well.

That seems pretty effective to me.

Clifford Heath.

It's an advance for sure, due to a mixture of things: medical research, financial development, the time to put various improvements in place, developments in car design, all sorts of things. Obviously medical research has brought positive results, but it's been a very miss & sometimes hit path. Now that we can do better, we need to.

You misinterpret. All the other things became possible because *people
live longer* because medicine and basic hygiene stopped them dying young.

When everyone died at 50-60, we didn't take the time to even get
properly educated - not if we wanted to see our grand-children. So we
certainly couldn't do the other things too.

Clifford Heath.


Obviously there are a bunch of factors, of which living longer is one. More efficient practices are another, leading to shorter working weeks. It's hard to advance much when almost 100% of the population is working excessive hours in fields growing crops. How you can get 'you misinterpret' from that I don't know.


NT


Guest

Fri Feb 08, 2019 12:45 pm   



On Thursday, 7 February 2019 11:33:57 UTC, bill....@ieee.org wrote:
Quote:
On Thursday, February 7, 2019 at 8:58:14 PM UTC+11, tabby wrote:
On Thursday, 7 February 2019 03:28:19 UTC, bill....@ieee.org wrote:
On Thursday, February 7, 2019 at 3:21:33 AM UTC+11, tabby wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published, this is
on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

fact-free rubbish snipped


more fact-free rubbish snipped

Quote:
The whole purpose of the scientific publication process is that once
something is published other researchers can repeat the same experiment
and either confirm or refute the claims made by the first group.

I assumed we all knew what peer review is. It's a nice idea but there are some issues with it in practice:
1. Research is routinely done for profit, and sponsoring companies inevitably pay researchers that give them the best results. It takes no genius to work out how that goes.

Industrial research is routinely done for profit (or as precaution against future loss). Academic research is largely motivated by a desire to get publications in high prestige journals, and citations for the stuff that gets published.

both of which lead to similar pressures

Cravings for money or prestige both lead to bad behaviour, but rather different sorts of bad behaviour.


both result in pressure to get a result. The same methods get used in both cases.


Quote:
Pharmacy companies don't normally publish negative results, but that's the only obvious distortion in the process. Academics also find it hard to publish negative results.

heh. There speaks the clueless.

NT hasn't had to listen to academics complaining about not being able to publish negative results. He imagines himself to be clueful, but this does seem to be one more of his ego-boosting delusions.


whoosh

Quote:
2. Others can redo the experiment but seldom do unless paid to, which in most cases they aren't. When they are paid to they're under the profit motive, which encourages an awful lot of overlooking & more.

Only some of them are influenced by the profit motive. A large chunk of the motivation for publication is to get noticed - even in profit-driven industry.

both of which...

Look the same to NT, who does have these ego-boosting delusions.


both of which lead to the same result

Quote:
Lying to get noticed does happen

https://en.wikipedia.org/wiki/Diederik_Stapel

but getting found out has catastrophic consequences.

sometimes. Unfortunately nowadays research is routinely accepted from people that have been found fiddling things beforehand - and even research where fiddling has been found within it. This is one of the failings of NICE.

Example?


c) people working in the field but not having phd qualifications usually think their voice won't be heard.

A Ph.D. is a remarkably narrow qualification. Mine is in Physical Chemistry, but my publications are entirely within the instrumentation literature. Editors couldn't care less whether you have a Ph.D. and everybody (you excepted) seems to know that.

the point is that there are many who work in fields eg nurses who see the reality entirely at odds with research day in day out. Their findings don't get published.

The reality they see may not be of the kind that is susceptible to publication, which does depend on reproducible effects.


They're simply not in a position to publish their findings.


Quote:
Great idea, but it doesn't work as well as one would hope.

It's better than any other idea that anybody has come up with.

I see you're unable to read a response before replying as well as short on clues about research.

The Cochrane collaboration is just the peer-reviewed scientific method being forced on a medical establishment that much preferred a more feudal system in which senior people - with ideas that could be quite as silly as NT's - could write what they liked and get it published because they were senior figures.


whoosh

Quote:
NT is absolutely right - for once - in saying that such data-gathering should be built into any developed nation's health service, but it has only recently become a practical option, and privacy issues do complicate the process.

The need to ask patients for permission does not stop the process at all.

If patients can opt out at random, it shrinks the pool of subjects and could bias it in unpredictable ways.


a percentage opting out is not a problem


Quote:
Ultimately one needs to be realistic about medical research. It's an inherently shall we say messy field, and believing what one is told is generally naive.

more sillyboll snipped


Just a few to start with:

Why Most Published Research Findings Are False
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

It's a rather extreme claim, based on a rather particular application of the word "false".

A slew of problems:
http://fixingpsychology.blogspot.com/2013/01/holiday-special-year-of-scandals-2012.html

I've already cited Diederik_Stapel. The US is bigger than the Netherlands and a few more rogues are to be expected. It isn't exactly proof that everything that is published is fraudulent or even "false".


there's a whole lot more on those pages than Mr. Stapel.

Quote:
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844

Peer review does look at timeliness as well as content. Resubmitting old articles probably isn't a good measure. The field does move on.

a team led by University of Virginia’s Brian Nosek repeated 100 psychological experiments and found that only 36% of originally “significant” (in the statistical sense) results were replicated.
https://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/

They got to pick the 100 psychological experiments they were trying to replicate. Nobody would waste time replicating the McGurk effect

https://en.wikipedia.org/wiki/McGurk_effect

or anything else that is spectacularly reliable.

etc etc etc etc etc etc etc etc.

As if NT wasn't already scraping the bottom of the barrel.

There is the point that psychology uses humans as experimental animals, and they aren't good test subjects. One of my friends swore off human subjects as soon as she could and moved over to bats, who were much more reproducible.

Those links only look at some aspects of the problem. There are more major problems elsewhere in the process.

None of which NT will be able to identify.


I identified some in my first undergraduate research project. It became apparent that I could choose to interpret the data whichever way I wanted, by emphasising different factors & choosing to eliminate differing data due to some non-obvious issues I could choose to either notice or not notice. None of the articles I've flagged even touch that stuff. And there's no point wasting my time talking with you about them, or indeed anything. You're lost in your own ego & bs.


NT


Guest

Fri Feb 08, 2019 12:45 pm   



On Thursday, 7 February 2019 11:45:27 UTC, bill....@ieee.org wrote:
Quote:
On Thursday, February 7, 2019 at 9:01:17 PM UTC+11, tabby wrote:
On Thursday, 7 February 2019 03:29:06 UTC, Clifford Heath wrote:
On 7/2/19 12:56 pm, tabbypurr wrote:
On Thursday, 7 February 2019 00:54:20 UTC, Clifford Heath wrote:

Peer review is one way to start sorting out which.

I just wish it were effective in practice. The world would be a better place.

I'm glad I wasn't born a century ago.

I get to live twice as long, and ten times as well.

That seems pretty effective to me.

Clifford Heath.

It's an advance for sure. Due to a mixture of things: medical research, financial development, the time to put various improvements in place, developments in car design, all sorts of things. Obviously medical research has brought positive results, but it's been a very miss & sometimes hit path. Now that we can do better, we need to.

And the reason we now do it better is that medical research has been - to some extent - taken out of the hands of doctors who are trained (for their own psychological health) to make up their minds quickly, and not ratiocinate about the process after the event.

The scientific method depends on the idea that you could be getting something wrong - and might just have killed a few patients in consequence - so the people who do that sort of thinking have to be at some remove from the sharp end.

Big studies, spread over lots of doctors and even more patients, help get that kind of depression-avoiding separation.


It's one reason certainly. Scary that we agree on something. There have also been many advances made in less rigorous ways - those also have their place.


NT

Martin Brown
Guest

Fri Feb 08, 2019 12:45 pm   



On 07/02/2019 00:54, whit3rd wrote:
Quote:
On Wednesday, February 6, 2019 at 5:39:25 AM UTC-8, Martin Brown wrote:

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

The whole purpose of the scientific publication process is that once
something is published other researchers can repeat the same experiment
and either confirm or refute the claims made by the first group.

Refutation of claims, or confirmation by repeating observations, IS peer review,


Yes. But the peer review prior to publication consists of sending the
draft out to a handful of experts in the field and asking for comments.
These can range from helpful suggestions to downright rude insults
depending on the quality of the paper and the mood of the reviewer.

Quote:
just like the first-cut oversight of editors and reviewers before publication.
We've seen modern reviews of Gregor Mendel's statistics, for instance...
Peer review is open-ended, continuous, and perhaps eternal.


Plenty of junk is weeded out without ever being published. Some
prestigious journals are rather careful about what they publish.

A sad example from history is the poor Russian chemist Belousov, who in 1951
discovered the canonical oscillating redox reaction. It was so
counter-intuitive to everything chemists believed at the time that his
paper was rejected out of hand and the simple recipe he gave went untested.

https://en.wikipedia.org/wiki/Belousov%E2%80%93Zhabotinsky_reaction

Belousov was posthumously awarded the Lenin Prize for his work, but gave
up science because nobody would believe him or look at his ground-breaking
work. Eventually a student, Zhabotinsky, was given it as a project and
repeated the experiments successfully, getting them publicised via a
meeting in Prague. When news reached the West the trick was a favourite
for schools lectures as a chemical clock that went tick-tock tick-tock
for quite a while. Until then they only went "tick".

--
Regards,
Martin Brown

Martin Brown
Guest

Fri Feb 08, 2019 1:45 pm   



On 06/02/2019 16:43, John Larkin wrote:
Quote:
On Wed, 6 Feb 2019 02:56:43 -0500, bitrex <user_at_example.net> wrote:

On 02/05/2019 07:49 PM, bill.sloman_at_ieee.org wrote:
Today's Proceedings of the (US) National Academy of Sciences has a second interesting paper

https://www.pnas.org/content/pnas/116/6/1910.full.pdf

I'm not even sure that I should have labelled this post off-topic.


Cerebral-type engineers tend to find the arts and humanities deeply
terrifying; they've examined the various artifacts those disciplines
produce and can't determine their function. why anyone would expend so
much effort to produce useless things is deeply mysterious and inscrutable.

They usually do it for fame and money. Nothing inscrutable about that.


Very few artists get rich unless some random oligarch takes a real fancy
to their output. Comparatively few make a decent living. Some who are
now very famous names scraped along barely surviving from day to day.

Only the best (or strictly most sought after) artists become rich and
famous, and the majority (but not all) of them have done something
noteworthy to have gained that following and acclaim.

There are a few though that I would not give houseroom to.

Quote:
What's interesting (but not mysterious) is why they sometimes get fame
and money.


Some novel stuff clearly does involve real creativity and innovation.

OTOH a full ash tray or an unmade bed is basically just trying it on to
see what you can sell your trash for once you have a famous name.

Quote:
My concern about "art" is that anyone can call himself an "artist",
and that no art critic ever dares to say "that's bad and ugly."


You must read the wrong art critics. I have seen some pretty savage
reviews of bad modern art.

Quote:
So, art becomes basically meaningless, random neural activity. But the
"artists" still demand respect and cheap rent.


Not really. I think good artists actually share a lot in common with old
school pcb layout guys in using spatial visualisation and imagination.

I have seen some really crap art but most of it was OK (and I did go
fairly regularly to the Royal Academy summer exhibition for a while).
The worst in terms of being vastly overpriced were some very badly made
neon signs by a certain famous for being infamous modern artist.

--
Regards,
Martin Brown

Martin Brown
Guest

Fri Feb 08, 2019 2:45 pm   



On 07/02/2019 05:16, John Larkin wrote:
Quote:
On Thu, 7 Feb 2019 11:54:12 +1100, Clifford Heath
no.spam_at_please.net> wrote:

On 7/2/19 3:21 am, tabbypurr_at_gmail.com wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown
wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was
published, this is on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the
peer reviewed literature is not to put to fine a point on it
wrong.

In the one medical subject I have some in-depth knowledge of,
99.9% is wrong. In medicine generally, the figure is 90 something
percent.


It doesn't agree with your prejudices - that is entirely different.

Quote:
Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

Someone recently suggested that there should be journals that
publish failed experiments. That makes enormous sense.


Some of the most famous experiments ever have been null results.

The Michelson-Morley experiment to measure the ether drift failed
miserably to detect any ether at all. But it was a massive triumph of
experimental excellence and a prelude to relativity.

Likewise for the Eötvös experiment to look for a difference between
inertial and gravitational mass. Again a null result.

Just because the experiment didn't give the result that people were
expecting doesn't mean it failed. It is the experiments that gave
results that refute the existing paradigm which are remembered forever.

BTW we would be up to the eyeballs in worthless reports of failure to
reproduce the infamous Fleischmann & Pons cold fusion experiment by now
if everyone who tried it wrote just one A4 report.

> Peer review would be fun too.

Publishing complete junk does no-one any good.

Ensuring that all the data obtained in medical trials is available for
inspection makes good sense, otherwise there is a tendency to cherry-pick
only those trials which show what the researchers want to see.

--
Regards,
Martin Brown


Guest

Fri Feb 08, 2019 3:45 pm   



On Friday, February 8, 2019 at 9:49:57 PM UTC+11, tabb...@gmail.com wrote:
Quote:
On Thursday, 7 February 2019 11:33:57 UTC, bill....@ieee.org wrote:
On Thursday, February 7, 2019 at 8:58:14 PM UTC+11, tabby wrote:
On Thursday, 7 February 2019 03:28:19 UTC, bill....@ieee.org wrote:
On Thursday, February 7, 2019 at 3:21:33 AM UTC+11, tabby wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was published,
this is on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the peer
reviewed literature is not to put to fine a point on it wrong.

In the one medical subject I have some in-depth knowledge of, 99.9% is wrong. In medicine generally, the figure is 90 something percent.

fact-free rubbish snipped

more fact-free rubbish snipped


NT can't deal with the fact that his opinions aren't widely shared, and writes off observations to this effect as "fact-free rubbish", which is a trifle ironic, since those opinions, where he has said something factual enough to let one criticise him, aren't exactly evidence-based.

Quote:
The whole purpose of the scientific publication process is that once
something is published other researchers can repeat the same
experiment and either confirm or refute the claims made by the first
group.

I assumed we all knew what peer review is. It's a nice idea but there are some issues with it in practice:
1. Research is routinely done for profit, and sponsoring companies inevitably pay researchers that give them the best results. It takes no genius to work out how that goes.

Industrial research is routinely done for profit (or as precaution against future loss). Academic research is largely motivated by a desire to get publications in high prestige journals, and citations for the stuff that gets published.

both of which lead to similar pressures

Cravings for money or prestige both lead to bad behaviour, but rather different sorts of bad behaviour.

both result in pressure to get a result. The same methods get used in both cases.


NT will not - of course - be willing to be specific about the "methods" involved.

Quote:
Pharmacy companies don't normally publish negative results, but that's the only obvious distortion in the process. Academics also find it hard to publish negative results.

heh. There speaks the clueless.

NT hasn't had to listen to academics complaining about not being able to publish negative results. He imagines himself to be clueful, but this does seem to be one more of his ego-boosting delusions.

whoosh


NT does like to pose as somebody who knows what he is talking about, but he retreats into dismissive evasions - like "whoosh" - a little too fast to make the pose tenable. He's just a gullible twit who has got away with posing as knowledgeable for a little too long.

Quote:
2. Others can redo the experiment but seldom do unless paid to, which in most cases they aren't. When they are paid to they're under the profit motive, which encourages an awful lot of overlooking & more.

Only some of them are influenced by the profit motive. A large chunk of the motivation for publication is to get noticed - even in profit-driven industry.

both of which...

Look the same to NT, who does have these ego-boosting delusions.

both of which lead to the same result


The result is that stuff gets published which doesn't fit NT's favourite delusions. If he had any sense, he'd drop the delusions, but he doesn't, and prefers to bad-mouth the peer-reviewed literature.

Quote:
Lying to get noticed does happen

https://en.wikipedia.org/wiki/Diederik_Stapel

but getting found out has catastrophic consequences.

sometimes. Unfortunately nowadays research is routinely accepted from people that have been found fiddling things beforehand - and even research where fiddling has been found within it. This is one of the failings of NICE.

Example?


Oddly enough, NT couldn't find one.

Quote:
c) people working in the field but not having Ph.D. qualifications usually think their voice won't be heard.

A Ph.D. is a remarkably narrow qualification. Mine is in Physical Chemistry, but my publications are entirely within the instrumentation literature. Editors couldn't care less whether you have a Ph.D. and everybody (you excepted) seems to know that.

the point is that there are many who work in fields eg nurses who see the reality entirely at odds with research day in day out. Their findings don't get published.

The reality they see may not be of the kind that is susceptible to publication, which does depend of reproducible effects.

They're simply not in a position to publish their findings.


Actually they are

https://en.wikipedia.org/wiki/Therapeutic_touch

This and other demented ideas do get published, and sometimes do get taught to nurses.

Quote:
Great idea, but it doesn't work as well as one would hope.

If better than any other idea that anybody has come up with.

I see you're unable to read a response before replying as well as short on clues about research.

The Cochrane collaboration is just the peer-reviewed scientific method being forced on a medical establishment that much preferred a more feudal system in which senior people - with ideas that could be quite as silly as NT's - could write what they liked and get it published because they were senior figures.

whoosh


NT hasn't got anything to say, so he posts "whoosh" ...

Quote:
NT is absolutely right - for once - in saying that such data-gathering should be built into any developed nation's health service, but it has only recently become a practical option, and privacy issues do complicate the process.

The need to ask patients for permission does not stop the process at all.

If patients can opt out at random, it shrinks the pool of subjects and could bias it in unpredictable ways.

a percentage opting out is not a problem


Anything that biases the sampling is a problem.
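
A minimal simulation (made-up numbers, purely to illustrate the point) shows why: if the patients who opt out differ systematically from those who stay in, the registry's estimates shift even when most people participate.

    # Sketch with assumed numbers: 20% of patients have a condition, and affected
    # patients are assumed to opt out twice as often as unaffected ones.
    import random
    random.seed(0)

    population = [random.random() < 0.20 for _ in range(100_000)]  # True = has condition

    def stays_in(has_condition):
        opt_out_rate = 0.30 if has_condition else 0.15             # differential opt-out
        return random.random() > opt_out_rate

    registry = [c for c in population if stays_in(c)]

    print(f"True prevalence:     {sum(population) / len(population):.1%}")
    print(f"Registry prevalence: {sum(registry) / len(registry):.1%}")   # noticeably lower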

Quote:
Ultimately one needs to be realistic about medical research. It's an inherently shall we say messy field, and believing what one is told is generally naive.

more sillyboll snipped


Just a few to start with:

Why Most Published Research Findings Are False
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

It's a rather extreme claim, based on a rather particular application of the word "false".

A slew of problems:
http://fixingpsychology.blogspot.com/2013/01/holiday-special-year-of-scandals-2012.html

I've already cited Diederik_Stapel. The US is bigger than the Netherlands and a few more rogues are to be expected. It isn't exactly proof that everything that is published is fraudulent or even "false".

there's a whole lot more on those pages than Mr. Stapel.


A few rogues isn't a "whole lot" more.

Quote:
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6577844

Peer review does look at timeliness as well as content. Resubmitting old articles isn't probably a good measure. The field does move on.

a team led by University of Virginia’s Brian Nosek repeated 100 psychological experiments and found that only 36% of originally “significant” (in the statistical sense) results were replicated.
https://www.theatlantic.com/science/archive/2015/08/psychology-studies-reliability-reproducability-nosek/402466/

They got to pick the 100 psychological experiments they were trying to replicate. Nobody would waste time replicating the McGurk effect

https://en.wikipedia.org/wiki/McGurk_effect

or anything else that is spectacularly reliable.

etc etc etc etc etc etc etc etc.

As if NT wasn't already scraping the bottom of the barrel.

There is the point that psychology uses humans as experimental animals, and they aren't good test subjects. One of my friends swore off human subjects as soon as she could and moved over to bats, who were much more reproducible.

Those links only look at some aspects of the problem. There are more major problems elsewhere in the process.

None of which NT will be able to identify.

I identified some in my first undergraduate research project. It became apparent that I could choose to interpret the data either way I wanted, by emphasising different factors & choosing to eliminate differing data due to some non-obvious issues I could choose to either notice or not notice.


Then you weren't doing anything remotely useful. A properly constructed experiment doesn't give you any room to "emphasise different factors". You observe what happens and report it, and you don't get to choose to eliminate inconvenient data.

Clearly, you had decided that you could get away with cheating, and didn't realise that this made the exercise a complete waste of time.

> None of the articles I've flagged even touch that stuff.

You might try reading up on psychopathic personality disorders. These are the people who cheat all the time and assume everybody else does too.

> And there's no point wasting my time talking with you about them, or indeed anything. You're lost in your own ego & bs.

You don't see any point in talking to people who don't share your bizarre confidence in your own judgement. See above.

--
Bill Sloman, Sydney

John Larkin
Guest

Fri Feb 08, 2019 5:45 pm   



On Fri, 8 Feb 2019 11:51:43 +0000, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

Quote:
On 06/02/2019 16:43, John Larkin wrote:
On Wed, 6 Feb 2019 02:56:43 -0500, bitrex <user_at_example.net> wrote:

On 02/05/2019 07:49 PM, bill.sloman_at_ieee.org wrote:
Today's Proceedings of the (US) National Academy of Sciences has a second interesting paper

https://www.pnas.org/content/pnas/116/6/1910.full.pdf

I'm not even sure that I should have labelled this post off-topic.


Cerebral-type engineers tend to find the arts and humanities deeply
terrifying; they've examined the various artifacts those disciplines
produce and can't determine their function. why anyone would expend so
much effort to produce useless things is deeply mysterious and inscrutable.

They usually do it for fame and money. Nothing inscrutable about that.

Very few artists get rich unless some random oligarch takes a real fancy
to their output. Comparatively few make a decent living. Some who are
now very famous names scraped along barely surviving from day to day.


And millions of people still buy Superball lottery tickets.


--

John Larkin Highland Technology, Inc

lunatic fringe electronics

John Larkin
Guest

Fri Feb 08, 2019 5:45 pm   



On Fri, 8 Feb 2019 12:46:05 +0000, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

Quote:
On 07/02/2019 05:16, John Larkin wrote:
On Thu, 7 Feb 2019 11:54:12 +1100, Clifford Heath
no.spam_at_please.net> wrote:

On 7/2/19 3:21 am, tabbypurr_at_gmail.com wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown
wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was
published, this is on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the
peer reviewed literature is not to put to fine a point on it
wrong.

In the one medical subject I have some in-depth knowledge of,
99.9% is wrong. In medicine generally, the figure is 90 something
percent.

It doesn't agree with your prejudices - that is entirely different.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

Someone recently suggested that there should be journals that
publish failed experiments. That makes enormous sense.

Some of the most famous experiments ever have been null results.

The Michelson-Morely experiment to measure the ether drift failed
miserably to detect any ether at all. But it was a massive triumph in
experimental excellence an a prelude to relativity.

Likewise for the Eotvos experiment to look for a difference between
inertial and gravitational mass. Again a null result.

Just because the experiment didn't give the result that people were
expecting doesn't mean it failed. It is the experiments that gave
results that refute the existing paradigm which are remembered forever.

BTW we would be up to the eyeballs in worthless reports of failure to
reproduce the infamous Fleischmann & Pons cold fusion experiment by now
if everyone who tried it wrote just one A4 report.

Peer review would be fun too.

Publishing complete junk does no-one any good.


A properly run failed experiment isn't junk.

A journal of failed experiments would guide future experimenters, and
be an interesting test of published "successful" experiments.

We sometimes do experiments that appear to be failures, but contain
unappreciated effects that turn out to be valuable later.

Quote:

Ensuring that all the data obtained in medical trials is available for
inspection makes good sense otherwise there is a tendency to cherry pick
only those trials which show what the researchers want to see.


Right. And other trials that didn't find the same causality stay
hidden. Information is lost.





--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Martin Brown
Guest

Fri Feb 08, 2019 7:45 pm   



On 08/02/2019 16:25, John Larkin wrote:
Quote:
On Fri, 8 Feb 2019 12:46:05 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 07/02/2019 05:16, John Larkin wrote:
On Thu, 7 Feb 2019 11:54:12 +1100, Clifford Heath
no.spam_at_please.net> wrote:

On 7/2/19 3:21 am, tabbypurr_at_gmail.com wrote:
On Wednesday, 6 February 2019 13:39:25 UTC, Martin Brown wrote:
On 06/02/2019 12:00, tabbypurr wrote:
On Wednesday, 6 February 2019 10:38:46 UTC, Martin Brown
wrote:
On 06/02/2019 06:33, John Robertson wrote:

Was this peer reviewed? No one READ it before it was
published, this is on the front page.

Peer reviewed doesn't guarantee quality.

understatement of the century there.

You are *way* too cynical and paranoid.

It has always been the case that about 10% of everything in the
peer reviewed literature is not to put to fine a point on it
wrong.

In the one medical subject I have some in-depth knowledge of,
99.9% is wrong. In medicine generally, the figure is 90 something
percent.

It doesn't agree with your prejudices - that is entirely different.

Every description of reality is wrong. Some are just less wrong.
Peer review is one way to start sorting out which.

Someone recently suggested that there should be journals that
publish failed experiments. That makes enormous sense.

Some of the most famous experiments ever have been null results.

The Michelson-Morely experiment to measure the ether drift failed
miserably to detect any ether at all. But it was a massive triumph in
experimental excellence an a prelude to relativity.

Likewise for the Eotvos experiment to look for a difference between
inertial and gravitational mass. Again a null result.

Just because the experiment didn't give the result that people were
expecting doesn't mean it failed. It is the experiments that gave
results that refute the existing paradigm which are remembered forever.

BTW we would be up to the eyeballs in worthless reports of failure to
reproduce the infamous Fleischmann & Pons cold fusion experiment by now
if everyone who tried it wrote just one A4 report.

Peer review would be fun too.

Publishing complete junk does no-one any good.

A properly run failed experiment isn't junk.


I intended to synthesise 2,4-dinitrophenyl oxalate (the Cyalume patent
glowstick compound) but the nitrogen blanket failed and I ended up with
useless impure brown gunge. How is that of any use to anybody?
Quote:

A journal of failed experiments would guide future experimenters, and
be an interesting test of published "successful" experiments.


You might enjoy "The Journal of Irreproducible Results" http://jir.com/
Quote:

We sometimes do experiments that appear to be failures, but contain
unappreciated effects that turn out to be valuable later.


Such experiments usually *are* published in the literature - at least in
the hard sciences. Failure to find the ether drift for example.
It is hard to think of a more famous null result experiment.

Quote:
Ensuring that all the data obtained in medical trials is available for
inspection makes good sense otherwise there is a tendency to cherry pick
only those trials which show what the researchers want to see.

Right. And other trials that didn't find the same causality stay
hidden. Information is lost.


I am in favour of keeping the data mainly because you may be able to
trawl through it later and pull signal out of noise in any large dataset
once you actually know what it is you are looking for.

Challis had observed Neptune a month earlier than its recognised
discoverer and would not have bothered looking at all but for Airy's
intervention to make him do it. He had seen it first, but lack of good
charts and time meant he didn't recognise the fact. Adams' predictions
were spot on, but Urbain Le Verrier had the same result and a more
willing German observer, who confirmed his prediction on 23 Sept 1846.

https://en.wikipedia.org/wiki/James_Challis#The_search_for_the_eighth_planet

--
Regards,
Martin Brown
