Introduction to Scientific Method and Experimental Design MED610 Pt 1: Thinking Scientifically Dr Colby L. Eaton


Page 1: Introduction to Scientific Method and Experimental Design

Introduction to Scientific Method and Experimental Design MED610

Pt 1: Thinking Scientifically

Dr Colby L. Eaton

Page 2: Introduction to Scientific Method and Experimental Design

What is our world view? -how do we make decisions and solve problems in our lives?

How is our world view informed? -why do we adopt particular approaches to decision making and problem solving?

Is our approach/world view scientific?

Research thinking: Some general thoughts

Do we bring our world view into our research?

Do we have to re-educate ourselves to do good science?

How do we view facts?

What is truth?

Page 3: Introduction to Scientific Method and Experimental Design

What is our world view and how is it informed?

-we acquire knowledge: school, college, parents, friends, faith

-Media, internet

-we make decisions to solve problems we face and experience success and failure

What do we learn from this?

Success: affirms we were right, we repeat this choice, don’t think about the problem again

Failure: we dismiss this choice, try another

Usually we don’t have time or the inclination to ask why our decision resulted in failure or design experiments to find out why.

Page 4: Introduction to Scientific Method and Experimental Design

Is our approach/world view scientific?

Do we bring our world view into our research?

-if all we do is gather information and address problems based on our knowledge, accept successes and never investigate why we failed, the answer is NO.

Do we have to re-educate ourselves to do good science?

The answer is probably yes: a scientific approach is not intuitive. It requires us to systematically validate our ‘successes’ and provide explanations for failures in research. We don’t usually do this in our everyday lives.

There is a strong chance that this will happen: -we accept what we regard as a successful experiment without question because it is what we wanted; we don’t check if it is right.

-we discount those that ‘didn’t work’

-we fail to design a clear experiment to find out why the experiment ‘failed’ (i.e. gave an unexpected result), we just try something else.

Page 5: Introduction to Scientific Method and Experimental Design

How do we view facts?

Dictionary definition: Fact

-something that actually exists; reality; truth: Your fears have no basis in fact.

-something known to exist or to have happened: Space travel is now a fact.

-a truth known by actual experience or observation; something known to be true: Scientists gather facts about plant growth.

Are these definitions useful or clear from a scientific viewpoint?

From a scientific viewpoint, a fact is a descriptive term for a verifiable observation

It can be difficult to distinguish a fact from a hypothesis or theory

Are these facts?

The earth is a roughly spherical planet that spins on its axis and orbits the sun

Atoms are only composed of neutrons, protons and electrons

It is not possible for a particle to travel faster than the speed of light

Page 6: Introduction to Scientific Method and Experimental Design

What is truth?

This is a vast subject at the centre of philosophy, with a number of definitions that are often argued over.

From a scientific viewpoint, a potentially useful notion of the degree to which an observation is true is defined by its susceptibility to falsification.

A statement like: it is true that this is a cat

Is more likely to be true than the statement: it is true that substance X causes cancer in humans

-which may be disproved by later experiments.

In practical terms, notions of the truth may be used to distinguish factual descriptions from hypotheses -if we think a statement of truth is likely to be falsified then it is more likely to be a hypothesis or theory than a simple description. It is important to recognize these differences.

Page 7: Introduction to Scientific Method and Experimental Design

Thinking about your research project

What do we want to know? How are we going to do it?

Are we going to be scientists or observers?

Scientific method vs Inductive reasoning

Developing concepts Methods development/process optimisation

Page 8: Introduction to Scientific Method and Experimental Design

Surveys and correlations: the ‘Achilles heel’ of medical research

Many research projects that aim to solve a medical problem start and end with a survey:

Does eating red meat cause colorectal cancer?

Interview people with/without colorectal cancer and find out that those that had the disease had eaten more red meat. Conclude that eating red meat is a cause of the disease or is at least a ‘risk factor’ for everyone. Present findings as scientific fact. Publicize on popular news programmes.

But, there are obviously other factors:

Age, sex, genetics, lifestyle: smoker, alcohol consumption, fat/fibre intake, exposure to ionizing radiation/carcinogens in lifetime or parents’ lifetime, viral infection, inflammation etc.

It is impossible to account for all these variables although some studies claim to have done this.

Page 9: Introduction to Scientific Method and Experimental Design

Surveys and correlations: the ‘Achilles heel’ of medical research

Amongst the sciences, this approach is peculiar to medical research:

For example, you wouldn’t expect a structural engineer investigating why the right-hand bridge below collapsed to look for correlations between bridge colour and collapse just because he noticed that the intact bridge was all grey and the collapsed one two-tone.

He/she would do some experiments to test various theories for the collapse based on engineering principles related to the underlying structure of bridges. This is because these principles are relatively simple and can be explored easily.

Problems in biology are often complex with contributing factors ill-defined so correlations can be tempting but are often misleading

Page 10: Introduction to Scientific Method and Experimental Design

Surveys and correlations: the ‘Achilles heel’ of medical research

Medical research is hampered by not being able to directly test the effects of a suspected causative factor in an experiment: you can’t feed groups of genetically identical people a diet with/without red meat to see if they get cancer in 50 years.

You could use an animal model, but these have their own problems: they might not be like humans.

Surveys/correlations should be treated with extreme caution; they are a prime example of inductive reasoning, which is by definition not scientific.

Page 11: Introduction to Scientific Method and Experimental Design

Final note on correlations:

For those of you who watch what you eat, here's the final word on nutrition and health. It's a relief to know the ‘truth’ after all those conflicting nutritional studies.

1. The Japanese eat very little fat and suffer fewer heart attacks than Brits.

2. The Mexicans eat a lot of fat and suffer fewer heart attacks than Brits.

3. The Chinese drink very little red wine and suffer fewer heart attacks than Brits.

4. The Italians drink a lot of red wine and suffer fewer heart attacks than Brits.

5. The Germans drink a lot of beer and eat lots of sausages and fats and suffer fewer heart attacks than Brits.

EUREKA! Eat and drink what you like. Speaking English is what kills you!

Page 12: Introduction to Scientific Method and Experimental Design

Scientific method and inductive reasoning in designing experiments:

Inductive reasoning:

• Information thought to be related to a particular problem is collected.

• The information is analysed with reference to current knowledge and conclusions are drawn.

The problems with this are:

• Collected data may not be related to the problem and conclusions may therefore be wrong or irrelevant.

• Conservative interpretation is important: the data are correlative rather than definitive.

Page 13: Introduction to Scientific Method and Experimental Design

The problem is that we often accept inductive reasoning as a sound approach in our everyday lives:

I was personally lured into this approach in 1997.

In September that year, my son was knocked down on Ecclesall Road coming home from school. He was injured but made a full recovery. The parents decided that they needed to make a case for a crossing point so we followed Highways Dept guidelines…..

The Highways Dept say that you have to have 5 serious accidents in a stretch of road within 5 years before they will consider changing driving conditions: restricting speed, installing cameras etc. They claim this is a logical, scientific approach to road management.

We were certain a survey would show that this was a dangerous section of road, worthy of a crossing by the Dept’s criterion, which we thought was logical.

But is it……

Scientific method and inductive reasoning in designing experiments:

Page 14: Introduction to Scientific Method and Experimental Design

Silverdale school

My ‘anguished father’ experiment: a traffic accident survey of Ecclesall Road South, 1986-1996. ~600 students cross the road (red arrows); accidents are marked (blue dots; son’s in orange).

Inductive conclusions:

• There were 13 accidents in 10y

• Looks like it’s more dangerous to cross at the crossings!

• We certainly don’t need any more crossings -should we take out those we’ve got?

There are too many unevaluated variables to extrapolate the data to the last conclusion:

• Position of crossings relative to traffic visibility and speed, or distractions, i.e. crossing points were blackspots for accidents before the crossings were installed -we would need a controlled expt.

At least we can identify potential problems here. This is not as easy with, say, an immunohistochemical survey of expression of an antigen vs progression in 2000 tumours so we must be careful not to over-interpret any correlations made.

[Map figure labels: 10%, 10%, 80% (~350-400 students)]

Page 15: Introduction to Scientific Method and Experimental Design

There is also another intrinsic problem with all data collection exercises:

Just because an observation has been made a number of times in the past you can never be sure it will occur again even if there is a high probability it will:

i.e. that the sun will rise every day is a psychological expectation, not a strictly logical certainty!

There are no certainties but sometimes inductive reasoning assumes that there are. This can be misleading.

This leads us on to……….

Page 16: Introduction to Scientific Method and Experimental Design

Scientific method and inductive reasoning in designing experiments:

Scientific method:

• Accepts that there are no certainties, simply evolving theories or models that help us understand a problem.

• Starts with a hypothesis, which is a statement of what we think is going to happen in the experiment, essentially a best guess.

• The hypothesis is tested and either supported for now or new ideas and hypotheses arise from its destruction.

Page 17: Introduction to Scientific Method and Experimental Design

Scientific method and inductive reasoning in designing experiments:

The problem with scientific method is that it is easy to fall into the trap of trying to prove a hypothesis.

Scientific method recognises that there is a logical asymmetry between proving and falsifying a hypothesis: proof is not possible, falsification is. Experiments should therefore be designed to falsify the hypothesis, i.e. test it to destruction. This is Karl Popper’s major contribution to the philosophy of science.

Provided the hypothesis has been rigorously tested, we should be happy if it is destroyed, because we will have learnt something.

Page 18: Introduction to Scientific Method and Experimental Design

Hypothesis: all swans are white.

It is not possible to prove all swans are white, no matter how many white swans you bring. It is possible to falsify the hypothesis by finding a black swan!

Modified hypothesis: swans are white or black. We can look for differently coloured swans to falsify this, and we’ve learnt that swans from Australia are black.

Page 19: Introduction to Scientific Method and Experimental Design

Hypothesis: water always boils at 100°C

This happens at sea level in an open vessel but however many times we do it we can’t prove this is always the case. However if we put a lid on it...

Or take it up a mountain…..

The kettle will boil at temperatures other than 100°C.

Yep, you can’t make tea on Everest but we’ve learnt something about water boiling.

Page 20: Introduction to Scientific Method and Experimental Design

What is a testable hypothesis?

§ A testable hypothesis is a clear description of what you predict will happen in an experiment.

§ It must not include conditional words such as ‘may’ or ‘could’

§ It must be capable of being disproved

For example a statement like:

We hypothesize that substance x causes lung cancer -is a testable hypothesis

While statements like:

We hypothesize that substance X may cause lung cancer or we want to investigate whether substance X could cause lung cancer -are not testable hypotheses

Statements like: We hypothesize that substance X caused cancer in patients attending our clinic over the last 20 years -cannot be disproved in an experiment and so while this might be a hypothesis it is not testable

Page 21: Introduction to Scientific Method and Experimental Design

Scientific method and inductive reasoning in designing experiments:

It is very easy to forget to follow scientific method. Here is some advice to avoid being misled:

1) Be suspicious of apparently powerful, ‘infallible’ analyses, e.g. molecular biology.

Failure to be scientific in this case can lead to some horribly wrong conclusions:

e.g. early DNA fingerprinting was deemed absolutely safe for identifying individuals (quoted as >1,000,000:1 odds against being wrong) until it was challenged in court and found to be unsafe.

It turned out that when the original method was tested in volunteers and 2-3 in 500 had the same fingerprint, this was put down to pipetting error, because the inventors believed the technique was infallible. The data did in fact falsify this hypothesis and led to better fingerprinting techniques.

This also happens in day to day research, even if we start with good intentions…..

2) Don’t be fooled by an exciting but expected result...

Page 22: Introduction to Scientific Method and Experimental Design

Identifying splice-variants of gene X

Hypothesis: Only one splice variant of gene X is expressed in mammals.

You decide to test this -you’re doing well, being a good scientist.

Use kit to isolate RNA from different cell types

You do an RT-PCR expt with primers designed to cross intron-exon boundaries that could potentially pick up splice variants

You find different RNAs give different size products -you’re very happy!

You isolate fragments, clone and sequence them and find that there is some homology and some differences to known sequences

You express the sequence and raise antibodies to the peptide

You check whether the protein is expressed in the cells you isolated the RNA from but don’t find it or find it in all cells.

You come up with a number of explanations for this

The problem really starts here. Up to this point you have been scientific, but now you start to believe you’ve found something important and set out to prove it. You should instead ask: can the new PCR products be generated by ways other than splice variation? I.e. test the hypothesis: the fragments generated are splice variants.

These products could be:

§ generated by mispriming on contaminating DNA in the RNA samples, from sequences within gene X (e.g. in the intervening intron)

§ generated by self-priming of gene X and an unrelated misprimed sequence, DNA or RNA.

You should ask whether the sequences are represented in an appropriate cDNA library, whether they contain genomic sequences even though the primers were designed to exclude this, and whether they have the appropriate sequence configuration for spliced RNA.

Page 23: Introduction to Scientific Method and Experimental Design

The major lesson from this is:

Be hypercritical of your data and exhaustively exclude all the potential artifacts that might have given an expected result before you accept that the data supports your hypothesis.

Even then be open to revisiting it in the light of new findings as it is always vulnerable to falsification. Even Newton’s laws have had to be modified.

Page 24: Introduction to Scientific Method and Experimental Design

3) Be suspicious of statements/assumptions that purport to be certainties/truths: These are usually hypotheses that have become accepted as truth but which haven’t been tested. They are also a good point to start your work: challenge them.

4) Don’t avoid the ‘acid test’, ie the experiment that could destroy your hypothesis.

5) Don’t ignore/ dismiss the unexpected result -design an experiment to test it.

Page 25: Introduction to Scientific Method and Experimental Design

Scientific method and inductive reasoning in designing experiments:

Scientific method also leads to more interesting, novel lines of research with an ever increasing number of new hypotheses and questions to test them. It can be argued that all major advances in our understanding of the world derive from a scientific approach.

In contrast, inductive reasoning can close down areas of research since it tries to provide inclusive conclusions to the data collected.

Page 26: Introduction to Scientific Method and Experimental Design

Comparison of approaches taken to OPG studies

Starting assumption: OPG is a factor made exclusively by bone cells (paracrine? autocrine?) to modulate bone turnover.

Accept the assumption:

• Serum OPG is likely to be raised in conditions where bone is remodelled.

• Serum OPG will be raised in prostate cancer patients with metastases -confirmed experimentally.

• Conclusion: raised serum OPG in CaP patients is due to increased bone turnover, i.e. a result of tumour activity with little relevance to tumour growth.

Test the assumption:

• Do cells other than bone cells, e.g. tumour cells, make OPG?

• Test other cells: find that some tumour cells make OPG.

• Why do these cells make OPG? Is all OPG the same? Structure? Function?

• Discovery of other actions of OPG.

Page 27: Introduction to Scientific Method and Experimental Design

What does all this mean for you?

• Be suspicious of so-called ‘definitive’ or ‘infallible’ methods -question them.

• Be suspicious of ‘certainty’ statements/assumptions -use testing them as starting points for work.

• Don’t do experiments without a hypothesis, i.e. an idea of what results you predict -write the predicted data down.

• Use more than one method to test the same hypothesis.

• The controls are at least as important as the test parts of the experiment - they validate your test so choose them carefully.

• Don’t fall into the trap of trying to prove hypotheses.

• Don’t be afraid of the ‘acid test’ that will destroy your ‘pet’ hypothesis.

• Be happy when your ‘pet’ hypothesis is destroyed, you’ll soon get another one and you will have learnt something.

• If you do an observational experiment and report it, don’t oversell its implications.

Page 28: Introduction to Scientific Method and Experimental Design

Your approach should not be like this:

Or this

It’s black and looks like a hole, I’d say it’s a Black Hole

Dr Renly believes he’s close to the answer. He’s discovered that the clusters of galaxies spell out something but he’s still missing a few letters.

Page 29: Introduction to Scientific Method and Experimental Design

But more like this:

You shouldn’t ignore the unexpected result: it is probably the most important!

Page 30: Introduction to Scientific Method and Experimental Design

Thinking about your research project

What do we want to know? How are we going to do it?

Developing concepts Methods development/process optimisation

Page 31: Introduction to Scientific Method and Experimental Design
Page 32: Introduction to Scientific Method and Experimental Design

Taguchi experimental design using orthogonal arrays

A way of limiting test numbers in experiments designed to optimize parameters in experiments with large numbers of interacting variables (e.g. PCR, ELISA etc.) or where materials are precious/scarce (embryo research).

General principle:

Standard factorial designs are OK for small numbers of variables: e.g. 3 different agents tested on a cell line, each at 2 concentrations, requires 2³ = 8 tests to look at all possible combinations once. With repeats and/or more concentrations this can still get large.

With larger numbers of test parameters (agents or concentrations) things can really get out of hand. For example, 7 test parameters examined at 2 levels: factorial design = 2⁷ = 128 tests to do it once.

Taguchi suggested the use of an orthogonal array table to approach this, limiting the tests to just 8 while obtaining as much information. This becomes even more useful as parameter numbers increase: a 15-parameter experiment with 2 levels would require 16 tests using the orthogonal array approach versus 2¹⁵ = 32,768 using the standard factorial design. The key feature of the array is balance (hence ‘orthogonal’).
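The run counts quoted here are easy to check; a minimal sketch (the orthogonal-array sizes are the standard Taguchi L4/L8/L16 run counts):

```python
# A full 2-level factorial needs 2**k runs for k parameters; the matching
# 2-level orthogonal arrays (L4, L8, L16) need only 4, 8 or 16 runs.
for k, orth_runs in [(3, 4), (7, 8), (15, 16)]:
    factorial_runs = 2 ** k
    print(f"{k:>2} parameters: factorial = {factorial_runs:>6} runs, "
          f"orthogonal array = {orth_runs}")
# 7 parameters: 128 runs vs 8; 15 parameters: 32,768 runs vs 16.
```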

Page 33: Introduction to Scientific Method and Experimental Design

Orthogonal array with 7 parameters tested at 2 levels. Two levels are tested for each parameter, designated 1 or 2 (i.e. low or high concentrations).

Expt. test   A  B  C  D  E  F  G
     1       1  1  1  1  1  1  1
     2       1  1  1  2  2  2  2
     3       1  2  2  1  1  2  2
     4       1  2  2  2  2  1  1
     5       2  1  2  1  2  1  2
     6       2  1  2  2  1  2  1
     7       2  2  1  1  2  2  1
     8       2  2  1  2  1  1  2
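As a sketch of where such a table comes from (one standard construction, not necessarily how the table in these slides was built): the seven columns of the L8 array can be generated as XOR (parity) combinations of three base bits, and the ‘balance’ that makes it orthogonal can be checked directly.

```python
# Generate the L8 orthogonal array and verify its balance: each column uses
# each level 4 times, and every PAIR of columns shows each of the four
# level combinations exactly twice.
from itertools import combinations, product

# Columns A..G as XOR combinations of three base bits a, b, c.
cols = [lambda a, b, c: a, lambda a, b, c: b, lambda a, b, c: a ^ b,
        lambda a, b, c: c, lambda a, b, c: a ^ c, lambda a, b, c: b ^ c,
        lambda a, b, c: a ^ b ^ c]
L8 = [[f(a, b, c) + 1 for f in cols] for a, b, c in product((0, 1), repeat=3)]

for row in L8:
    print(*row)

# Balance within each column: each level appears in half the runs.
for j in range(7):
    column = [row[j] for row in L8]
    assert column.count(1) == column.count(2) == 4
# Orthogonality between columns: every level pair occurs equally often.
for i, j in combinations(range(7), 2):
    pairs = [(row[i], row[j]) for row in L8]
    assert all(pairs.count(p) == 2 for p in product((1, 2), repeat=2))
```

This balance is what lets 8 runs estimate the main effect of all 7 parameters: averaging the response over the runs where a parameter is at level 1 versus level 2 averages out the other parameters equally in both groups.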

Page 34: Introduction to Scientific Method and Experimental Design

3 parameters at 3 levels: 3³ = 27 tests (factorial) vs 9 (orthogonal)

Page 35: Introduction to Scientific Method and Experimental Design

How are we going to do it?

Methods development/process optimisation

General principles:

Keep experiments simple, minimise variables.

If you need to optimise multiple variables, consider Taguchi/orthogonal designs to minimise work-up time.

If you are doing experiments where statistical analysis is likely consult a statistician before you start.

Page 36: Introduction to Scientific Method and Experimental Design