
Evaluation

Part of Module 1: Development of practical skills in biology.

Evaluation asks whether a practical result deserves confidence and, if not, why not. It involves identifying weaknesses in method and measurement, distinguishing between precision and accuracy, and suggesting improvements that actually address the problem found.

Learning Objectives

  • 1.1.4-lo-1: Evaluate practical results and draw conclusions that follow from the evidence collected. (Core Idea; What Evaluation Should Cover)
  • 1.1.4-lo-2: Identify anomalies, limitations and sources of uncertainty in practical work. (What Evaluation Should Cover; Common Pitfalls)
  • 1.1.4-lo-3: Distinguish precision from accuracy and explain how each affects confidence in a method or result. (Useful Improvement Logic; Applied Contexts)
  • 1.1.4-lo-4: Suggest improvements that directly address the weakness identified rather than offering generic fixes. (Useful Improvement Logic; PAG-Linked Evaluation Patterns)

Core Idea

  • A conclusion can be reasonable and still be weak if the method did not control variables properly or the data were too uncertain.
  • An anomaly is a result that does not fit the overall pattern. It should not simply be deleted; it should be recognised and considered.
  • Precision is about how close repeated measurements are to one another. Accuracy is about how close a result is to the true value or to the intended measurement.
  • Uncertainty comes from apparatus limits, human judgement and method design, so it should be discussed explicitly.
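The precision/accuracy distinction above can be made concrete with numbers. In this illustrative sketch (the readings and the "true value" are hypothetical, not from any real practical), the repeats sit close together, so precision is good, yet the whole set is offset from the true value, so accuracy is poor:

```python
from statistics import mean, stdev

# Hypothetical repeat timings (s) for a reaction end point; assumed true value 12.0 s.
true_value = 12.0
readings = [14.1, 14.0, 14.2, 13.9, 14.1]

precision = stdev(readings)                   # spread of repeats about their own mean
accuracy_error = mean(readings) - true_value  # systematic offset from the true value

print(f"mean = {mean(readings):.2f} s")              # 14.06 s
print(f"precision (sample SD) = {precision:.2f} s")  # small spread: precise
print(f"accuracy error = {accuracy_error:+.2f} s")   # large offset: inaccurate
```

Readings like these suggest a systematic error (for example, a consistently late end-point judgement): taking more repeats would not fix it, because repeats improve confidence in precision, not accuracy.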

What Evaluation Should Cover

  • Whether the conclusion matches the evidence collected.
  • Whether the method really measured the biological variable of interest.
  • Whether there were uncontrolled variables that could have altered the outcome.
  • Whether the apparatus had enough sensitivity and precision.
  • Whether repeat measurements would likely improve reliability.

Useful Improvement Logic

  • If timing is uncertain, use automatic sensors or a clearer end point.
  • If colour judgement is subjective, use a colorimeter instead of the eye.
  • If a biological sample varies naturally, increase repeats or sample size.
  • If environmental conditions drift, control temperature, pH, light or concentration more carefully.

Common Pitfalls

  • Suggesting vague improvements such as "be more careful".
  • Naming a weakness without explaining how it affected the result.
  • Confusing accuracy with precision.
  • Treating all anomalies as mistakes rather than possible evidence of biological or method variation.
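Flagging an anomaly for discussion, rather than deleting it, can be sketched as a simple check. This is one rule of thumb among many, with hypothetical data and an illustrative tolerance; comparing against the median rather than the mean stops the anomaly itself from dragging the reference point towards it:

```python
from statistics import median

# Hypothetical repeat gas volumes (cm^3); the fourth reading looks out of pattern.
repeats = [23.5, 23.6, 23.7, 31.2, 23.8]

mid = median(repeats)       # the median resists the influence of the outlier
tolerance = 1.0             # illustrative tolerance (cm^3), e.g. tied to apparatus resolution
flagged = [x for x in repeats if abs(x - mid) > tolerance]

print("median:", mid)                      # 23.7
print("flagged for discussion:", flagged)  # [31.2] - examined, not deleted
```

A flagged value still has to be interpreted: it may be a method error (a leak, a timing slip) or genuine biological variation, and the evaluation should say which explanation the evidence supports.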

Applied Contexts

  • Enzyme practicals often raise issues of temperature control, subjective colour end points and mixing delays.
  • Potometer work raises issues of leaks, air bubbles and the fact that uptake is an estimate of transpiration, not a direct measurement.
  • Field sampling raises questions about sample size, bias and whether the sample represented the habitat fairly.

PAG-Linked Evaluation Patterns

  • Microscopy can fail through poor calibration, unclear staining or crushed specimens, so measurement error and image quality should be judged together.
  • Colorimetry is usually a stronger improvement than judging colour by eye, but only if the blank, filter and cuvette handling are themselves consistent.
  • Chromatography can separate poorly if the starting spot is too large or if similar substances have similar Rf values, so identification may remain uncertain.
  • Microbial results should always be questioned for contamination, unequal inoculation and inconsistent incubation conditions.
  • Response investigations often show genuine biological variation between organisms, so repeats, sample size and careful control comparisons matter more than a single dramatic result.

Key Terms

  • Accuracy: closeness of a result to the true value or to what was intended to be measured.
  • Precision: closeness of repeated measurements to one another.
  • Uncertainty: the doubt attached to a measurement because of apparatus limits or method design.
  • Reliability: the extent to which repeated measurements give consistent results.
  • Validity: the extent to which the investigation measured the biological factor it was supposed to measure.
  • Improvement: a specific change to method or apparatus that directly addresses an identified weakness.
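The uncertainty defined above is often quoted as a percentage: an absolute uncertainty (one common convention takes half the smallest division of the apparatus) divided by the reading. A minimal sketch with hypothetical values:

```python
# Hypothetical: a measuring cylinder with 1 cm^3 graduations.
reading = 24.0               # cm^3
absolute_uncertainty = 0.5   # cm^3, half the smallest division (one common convention)

percent_uncertainty = absolute_uncertainty / reading * 100
print(f"percentage uncertainty = {percent_uncertainty:.1f}%")  # 2.1%
```

The same calculation explains a standard improvement argument: measuring a larger volume (or a longer time) with the same apparatus lowers the percentage uncertainty even though the absolute uncertainty is unchanged.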

Connected Pages