Experimental Design

Randomization, replication, control, and blocking

Principles of Experimental Design

Three fundamental principles ensure valid experiments:

1. Control

Control confounding variables by keeping all conditions constant except for the treatment.

Methods:

  • Hold variables constant (same temperature, time of day, etc.)
  • Block on variables you can't control
  • Use control group (receives no treatment or standard treatment)

Example: When testing a fertilizer, keep water, sunlight, and soil type constant.

2. Randomization

Randomly assign experimental units to treatments.

Why it matters:

  • Eliminates systematic bias
  • Balances unknown confounding variables
  • Allows cause-effect conclusions

Random assignment ≠ random sampling!

  • Random sampling: selecting participants (for generalization)
  • Random assignment: assigning treatments (for causation)
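
A minimal Python sketch of the distinction, assuming a hypothetical population of 1,000 people and a study of 60 participants split evenly between a treatment and a control group:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population of 1,000 people, identified by ID
population = list(range(1000))

# Random sampling: choosing WHO enters the study (supports generalization)
participants = random.sample(population, k=60)

# Random assignment: choosing WHICH treatment each participant gets (supports causation)
random.shuffle(participants)
treatment_group = participants[:30]
control_group = participants[30:]

print(len(treatment_group), len(control_group))  # 30 30
```

Both steps use randomness, but they answer different questions: the sample decides who the results describe, and the assignment decides whether differences can be attributed to the treatment.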

3. Replication

Use an adequate number of experimental units in each treatment group.

Why it matters:

  • Reduces effect of chance variation
  • Increases reliability of results
  • Allows assessment of variability within treatment groups

Don't confuse replication with repetition:

  • Replication: Multiple experimental units per treatment
  • Repetition: Multiple measurements on same unit
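
A tiny data sketch makes the distinction concrete (the plant heights are invented for illustration): replication adds new experimental units per treatment, while repetition only re-measures the same unit.

```python
# Replication: three DIFFERENT plants (experimental units) per fertilizer
replicated = {
    "fertilizer_A": {"plant_1": 12.1, "plant_2": 11.8, "plant_3": 12.5},
    "fertilizer_B": {"plant_4": 10.2, "plant_5": 10.9, "plant_6": 10.5},
}

# Repetition: three measurements of the SAME plant (no new experimental units)
repeated = {"fertilizer_A": {"plant_1": [12.1, 12.0, 12.2]}}
```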

Types of Experimental Designs

Completely Randomized Design (CRD)

Method:

  1. Randomly assign all experimental units to treatments
  2. Each unit has equal chance of any treatment

When to use: Experimental units are homogeneous

Example: 60 students randomly assigned to 3 study methods (20 per method)

Advantages: Simple, easy to analyze
Disadvantages: Doesn't account for variation among units
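
A sketch of the student example above (IDs and method names are placeholders): shuffle all 60 units, then split them evenly across the three treatments.

```python
import random

random.seed(1)

students = [f"student_{i:02d}" for i in range(1, 61)]   # 60 hypothetical students
methods = ["method_A", "method_B", "method_C"]

# Completely randomized design: shuffle every unit, then cut into equal groups
random.shuffle(students)
assignment = {method: students[i * 20:(i + 1) * 20]
              for i, method in enumerate(methods)}

for method, group in assignment.items():
    print(method, len(group))  # each method receives 20 students
```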

Randomized Block Design (RBD)

Method:

  1. Group experimental units into blocks (similar units)
  2. Randomly assign treatments within each block
  3. Each treatment appears in each block

When to use: Experimental units vary on important characteristic

Example: Test teaching methods. Block by math ability (high/medium/low). Within each ability level, randomly assign to teaching methods.

Purpose: Reduce variability, increase precision

Key: The blocking variable is known before the experiment and accounts for variation you expect
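
A sketch of the teaching-methods example, assuming six hypothetical students per ability block; treatments are randomized separately inside each block, so every method appears (twice) in every block.

```python
import random

random.seed(7)

methods = ["method_A", "method_B", "method_C"]

# Hypothetical blocks: six students per math-ability level
blocks = {
    "high":   [f"H{i}" for i in range(1, 7)],
    "medium": [f"M{i}" for i in range(1, 7)],
    "low":    [f"L{i}" for i in range(1, 7)],
}

assignment = {}
for ability, students in blocks.items():
    shuffled = students[:]
    random.shuffle(shuffled)                 # randomize WITHIN the block only
    for i, student in enumerate(shuffled):
        assignment[student] = (ability, methods[i % len(methods)])

print(assignment)
```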

Matched Pairs Design

Special case of RBD with:

  • Two treatments only
  • Blocks of size 2 (matched pairs)

Two types:

Type 1: Natural pairs

  • Twins, siblings, matched subjects
  • Randomly assign one to treatment A, other to treatment B

Type 2: Same subject

  • Each subject receives both treatments
  • Random order (to avoid order effects)

Example: Test two medications on the same patients at different times, in random order
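
A sketch of the same-subject version: each hypothetical patient receives both medications, and only the order is randomized to guard against order effects.

```python
import random

random.seed(3)

patients = ["patient_1", "patient_2", "patient_3", "patient_4"]  # hypothetical IDs
treatments = ("medication_A", "medication_B")

# Each patient gets BOTH treatments; randomization decides only the order
schedule = {}
for patient in patients:
    order = list(treatments)
    random.shuffle(order)
    schedule[patient] = order

for patient, (first, second) in schedule.items():
    print(f"{patient}: {first} first, then {second}")
```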

Controlling Variability

Blinding

Single-blind: Subjects don't know which treatment they receive
Double-blind: Neither subjects nor evaluators know treatment assignment

Why blind?

  • Keeps the placebo effect (psychological response to believing one is treated) from biasing the comparison
  • Reduces bias in evaluation
  • Increases objectivity

Example: Drug study - patients don't know if they get drug or placebo (single-blind), and doctors evaluating don't know either (double-blind)

Placebo

Placebo: Fake treatment that appears identical to real treatment

Purpose: Control for placebo effect (improvement from belief in treatment)

Control group receives placebo, not just "no treatment"

Blocking

Block: Group of similar experimental units

Purpose: Reduce variability within treatment groups

Example: Block by gender if you expect men and women to respond differently

Within each block, randomly assign treatments

Sample Size and Statistical Significance

Larger sample sizes:

  • Detect smaller treatment effects
  • More likely to find statistical significance
  • More reliable results

But: Practical and ethical limits exist

Balance: Large enough for reliable results, not wastefully large
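
A small simulation sketch (the effect size, spread, and group sizes are invented) showing why sample size matters: the same true treatment effect is flagged as statistically significant far more often with 200 units per group than with 20.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.3        # hypothetical treatment effect, in standard deviations
n_simulations = 2000

for n_per_group in (20, 200):
    detections = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treated, control)   # two-sample t-test
        detections += p_value < 0.05
    print(f"n = {n_per_group:3d} per group: effect detected in "
          f"{detections / n_simulations:.0%} of simulated experiments")
```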

Experimental Terminology

Experimental Unit: Individual/item receiving treatment
Treatment: Specific condition applied
Factor: Explanatory variable (what you manipulate)
Level: Specific value of factor
Response Variable: Outcome measured

Example: Testing two fertilizers and two watering schedules

  • Factors: Fertilizer (2 levels), Watering (2 levels)
  • Treatments: 2 × 2 = 4 treatment combinations
  • Experimental units: Plots of land
  • Response: Plant growth
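
The treatment combinations are just the Cartesian product of the factor levels; a minimal sketch (level names are placeholders):

```python
from itertools import product

fertilizer = ["fertilizer_1", "fertilizer_2"]   # factor 1: 2 levels
watering = ["schedule_1", "schedule_2"]         # factor 2: 2 levels

treatments = list(product(fertilizer, watering))
print(len(treatments))   # 2 x 2 = 4 treatment combinations
for combo in treatments:
    print(combo)
```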

Scope of Inference

Random assignment → Causation
Can conclude treatment caused difference in response

Random sampling → Generalization
Can generalize results to population

Ideal: Both random sampling and random assignment
Common: Random assignment only (can show causation but only for these specific subjects)

Common Design Flaws

No randomization: Bias in treatment assignment
No control group: Nothing to compare to
Sample too small: Can't detect real effects
Confounding: Other variables change along with the treatment
No blinding: Placebo effect, evaluation bias
No replication: Can't assess variability

Designing an Experiment: Checklist

  1. Identify response variable and explanatory variable(s)
  2. Choose treatments (levels of factors)
  3. Select experimental units
  4. Randomly assign units to treatments
  5. Apply treatments
  6. Measure response
  7. Compare treatment groups
  8. Use control, randomization, replication
  9. Consider blocking, blinding, placebo as appropriate

Quick Reference

Three Principles:

  • Control: Keep other variables constant
  • Randomization: Random treatment assignment
  • Replication: Adequate sample size

Designs:

  • CRD: Random assignment to all treatments
  • RBD: Block then randomize within blocks
  • Matched Pairs: Blocks of size 2

Important Techniques:

  • Blinding: Prevent bias
  • Placebo: Control for psychological effects
  • Blocking: Reduce variability

Remember: A well-designed experiment can establish causation. Poor design leads to unreliable or invalid results, no matter how much data you collect!
