Lesson Flow

Learn

Goals and Concepts

Start with the capability target and concept set for this module.

Practice

Studio Activity

Apply the ideas in a guided activity tied to realistic outputs.

Check

Assessment Rubric

Use the rubric to verify competency and identify improvement targets.

Interactive Lab

Practice in short loops: checkpoint quiz, microtask decision, and competency progress tracking.

Checkpoint Quiz

Q1. Which output most clearly demonstrates module competency?

Competency is shown through measurable, method-linked evidence.

Q2. What should always accompany a technical claim in this curriculum?

Every claim should include boundaries and uncertainty.

Q3. What is the best next step after identifying a gap in understanding?

Progress improves when gaps become explicit practice targets.

Proofreading Triage Microtask

How should correction queues be prioritized?


Proofreading Error Annotation

Click the hotspot most likely to represent a high-impact merge/split error.

Segmentation and proofreading figure


Capability target

Design one hypothesis test with a metric, a null model, and an interpretation boundary statement.

Concept set

1) What makes a connectomics hypothesis testable?

A testable connectomics hypothesis must specify: (a) a structural feature that can be measured from the reconstructed data (e.g., synapse count, motif frequency, path length), (b) a comparison or null expectation (e.g., “more frequent than in a degree-preserving random graph”), and (c) an interpretation boundary (what the result does and does not prove). Many fascinating biological questions (“How does the cortex generate consciousness?”) are not directly testable with connectomics because they lack measurable structural endpoints.

Good hypothesis example: “Reciprocal connections between L2/3 pyramidal cells are enriched ≥2× compared to a degree-preserving null model.” — Measurable (synapse counts), has a null model, makes a specific quantitative prediction.

Poor hypothesis example: “The connectome explains how the brain processes language.” — No measurable endpoint, no null model, no interpretation boundary. This is an aspiration, not a hypothesis.
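The metric behind the good hypothesis, the count of reciprocally connected cell pairs, can be computed directly from an edge list. A minimal sketch; the node IDs and edges below are invented for illustration, not drawn from any real reconstruction:

```python
def reciprocal_pairs(edges):
    """Count unordered pairs {u, v} with edges in both directions."""
    eset = set(edges)
    return sum(1 for (u, v) in eset if u < v and (v, u) in eset)

# Toy directed edge list: 1<->2 is the only reciprocal pair.
toy_edges = [(1, 2), (2, 1), (2, 3), (3, 1)]
print(reciprocal_pairs(toy_edges))  # prints 1
```

The enrichment claim ("≥2× compared to a degree-preserving null") would then be tested by dividing this observed count by the mean count over many rewired null graphs.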

2) Choosing the right metric

The metric must match the hypothesis. Common connectomics metrics range from local measures (synapse counts between identified cell types, motif frequencies) to global ones (path length, mean degree).

Metric mismatch trap: Using a global metric (mean degree) to test a local hypothesis (microcircuit-specific wiring rule). The global metric may be normal even if the local pattern is highly abnormal. Always match the metric’s spatial and biological scope to the hypothesis.
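The trap is easy to demonstrate on toy graphs: two networks with identical mean degree can differ completely in local reciprocity. A minimal sketch with invented three-node graphs:

```python
def mean_out_degree(edges, n_nodes):
    """Global metric: average number of outgoing edges per node."""
    return len(edges) / n_nodes

def reciprocal_pairs(edges):
    """Local metric: unordered pairs connected in both directions."""
    eset = set(edges)
    return sum(1 for (u, v) in eset if u < v and (v, u) in eset)

# Two toy 3-node graphs, three directed edges each.
cycle = [(1, 2), (2, 3), (3, 1)]   # pure cycle: no reciprocity
recip = [(1, 2), (2, 1), (3, 1)]   # contains one reciprocal pair

# The global metric cannot tell them apart...
print(mean_out_degree(cycle, 3), mean_out_degree(recip, 3))  # 1.0 1.0
# ...but the local metric can.
print(reciprocal_pairs(cycle), reciprocal_pairs(recip))      # 0 1
```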

3) Null models are the foundation of interpretation

Every connectomics claim requires comparison to a null model. Without a null, you cannot distinguish specific wiring rules from generic statistical properties.
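One widely used null for connectome graphs is degree-preserving rewiring: repeatedly swap the targets of two edges, (a, b), (c, d) → (a, d), (c, b), which preserves every node's in- and out-degree while scrambling everything else. A minimal stdlib sketch; the swap count and rejection limit are arbitrary choices for illustration, not a published recipe:

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0):
    """Randomize a directed edge list while preserving each node's
    in- and out-degree via edge swaps (a, b), (c, d) -> (a, d), (c, b)."""
    rng = random.Random(seed)
    eset = set(edges)
    elist = list(eset)
    swaps, attempts = 0, 0
    while swaps < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(elist)), 2)
        (a, b), (c, d) = elist[i], elist[j]
        # Reject swaps that would create self-loops or duplicate edges.
        if a == d or c == b:
            continue
        if (a, d) in eset or (c, b) in eset:
            continue
        eset.remove((a, b)); eset.remove((c, d))
        eset.add((a, d)); eset.add((c, b))
        elist[i], elist[j] = (a, d), (c, b)
        swaps += 1
    return elist
```

Running the observed metric over many such rewired graphs yields the null distribution against which the real value is compared.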

4) Interpretation boundaries: what you can and cannot claim

Structure constrains possible computation but does not determine function. A connectomics result can say “this wiring pattern is consistent with function X” or “this wiring pattern is more common than expected,” but it cannot say “this circuit computes X” without functional evidence. Always state both the supported claim and the explicit non-claim.

Core workflow

  1. Define question and estimand: what structural feature would constrain or inform the biological question?
  2. Choose measurable outputs: specific metric(s) computed from the connectome graph.
  3. Select null model: the most stringent null relevant to the claim.
  4. Test and interpret results: compute metric, compare to null distribution, compute z-score and p-value.
  5. Document supported vs unsupported claims: what the result proves, what it doesn’t, and what additional evidence would be needed.
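Step 4 of the workflow, comparing the observed metric to a null distribution, reduces to a z-score plus an empirical p-value. A minimal sketch; the +1 correction is a common convention that keeps the empirical p from being exactly zero, and the toy numbers are invented:

```python
import statistics

def z_and_empirical_p(observed, null_values):
    """Compare an observed metric to metric values from null graphs.

    Returns (z, p): z-score against the null mean/stdev, and a
    one-sided empirical p-value for 'observed is unexpectedly large'.
    """
    mu = statistics.mean(null_values)
    sigma = statistics.stdev(null_values)
    z = (observed - mu) / sigma
    exceed = sum(1 for v in null_values if v >= observed)
    p = (exceed + 1) / (len(null_values) + 1)
    return z, p

# Toy numbers: observed metric 10 vs. a small invented null sample.
print(z_and_empirical_p(10, [1, 2, 3]))  # prints (8.0, 0.25)
```

In practice the null sample would come from hundreds or thousands of rewired graphs, and the resulting z and p feed directly into the supported-claim / non-claim write-up of step 5.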

60-minute tutorial run-of-show

Pre-class preparation (10 min async)

Minute-by-minute plan

  1. **00:00-08:00 Framing: good vs bad hypotheses**
    • Show 4 example hypotheses (2 good, 2 poor). Group identifies which are testable and why.
    • Key criteria: measurable endpoint, specified null, interpretation boundary.
  2. **08:00-20:00 Hypothesis drafting**
    • Each learner drafts a hypothesis using a template:
      • “In [dataset/region], [structural feature] is [comparison] compared to [null model].”
      • “This would support [interpretation] but would NOT prove [over-claim].”
    • Peer review: partner evaluates whether the hypothesis is testable.
  3. **20:00-34:00 Metric and null model selection**
    • For each drafted hypothesis, select the appropriate metric and null model.
    • Instructor walks through one example end-to-end: hypothesis → metric → null → expected result → interpretation.
    • Discussion: “What happens if you use the wrong null model?” Show how the same data looks significant or non-significant depending on null choice.
  4. **34:00-46:00 Interpretation workshop**
    • Present 3 pre-computed results (with p-values and z-scores). For each, learners write:
      • Supported claim (what the data shows)
      • Explicit non-claim (what the data does NOT show)
      • One confound that could explain the result
    • Group discussion of each result.
  5. **46:00-60:00 Competency check**
    • Each learner submits their final hypothesis with metric, null model, and interpretation boundaries.
    • Exit ticket: “Write one claim and one explicit non-claim from the same test outcome.”

Studio activity: hypothesis design and peer critique (60-75 minutes)

Scenario: Your lab is planning a study of feedforward vs feedback connectivity in mouse visual cortex using the MICrONS dataset. You need to design three testable hypotheses about the circuit architecture.

Task sequence:

  1. Draft 3 hypotheses (one about feedforward connections, one about feedback connections, one about reciprocal connections).
  2. For each: specify the metric, null model, required dataset version, and analysis code outline.
  3. For each: write the supported claim and explicit non-claim.
  4. Exchange with a partner. Critique their null model choices and interpretation boundaries.
  5. Revise based on peer feedback.

Expected outputs:

Assessment rubric

Content library references

Teaching resources

References

Quick practice prompt

Write one claim and one explicit non-claim from the same test outcome.

Teaching Materials

Activity Worksheet

Learner worksheet aligned to the studio activity and rubric.


Slide Source

Marp source file for editing and rendering.

course/decks/marp/modules/module08.marp.md
