06. Segmentation 101
Core segmentation error taxonomy—merges, splits, boundary errors—and a practical correction workflow with documented quality impact.
Apply these concepts through practical workflows, quality-control checks, tooling, and reproducible research operations.
This track focuses on applying connectomics knowledge to real research workflows: running analyses on petascale datasets, applying machine learning and computer vision to EM data, maintaining reproducibility, and producing publication-ready outputs. Resources connect directly to the MouseConnects dataset, the Connectome Quality tool, and the workflow pipeline from acquisition through circuit interpretation. Learners should complete the Core Concepts & Methods foundation before starting this track.
Fadel alignment: Skills, Meta-learning
Proofreading strategies that prioritize scientifically high-impact corrections and maintain reproducible, documented QC standards.
Scalable data architecture, query planning, and provenance tracking for petascale connectomics datasets like MICrONS and H01.
ML workflows for connectomics with controls for data leakage, spatial correlation bias, and biologically meaningful evaluation metrics.
Computer vision methods—from classical filters to deep learning—applied to EM imagery for segmentation support, morphology extraction, and quality diagnostics.
LLM-assisted patch triage and annotation support with human-in-the-loop verification gates to prevent hallucination and unsupported scientific inference.
Principled visualization of connectomics structures and analysis results: encoding uncertainty, avoiding misleading representations, and producing publication-ready figures.
Reproducible preprocessing workflows from raw connectomics data through analysis-ready releases with integrity checks, QC metrics, and full provenance.
Defensible statistical inference for connectomics: choosing null models, controlling multiplicity in high-dimensional tests, and reporting with explicit assumptions.
Operationalizing FAIR principles and reproducibility standards for connectomics datasets, analysis code, and public releases.
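As one concrete illustration of the leakage controls listed above, a spatially blocked train/test split holds out whole blocks of the volume rather than individual samples, so spatially correlated neighbors cannot straddle the train/test boundary. This is a minimal sketch, not tied to any specific pipeline; the block size and volume dimensions are illustrative:

```python
import numpy as np

def spatial_block_split(coords, block_size, test_fraction=0.2, seed=0):
    """Hold out whole spatial blocks so that spatially correlated
    neighbors never end up on both sides of the train/test split."""
    rng = np.random.default_rng(seed)
    block_ids = coords // block_size             # (N, 3) integer block index per sample
    keys = np.unique(block_ids, axis=0)          # distinct occupied blocks
    n_test = max(1, int(len(keys) * test_fraction))
    chosen = rng.choice(len(keys), size=n_test, replace=False)
    test_blocks = {tuple(k) for k in keys[chosen]}
    is_test = np.array([tuple(b) in test_blocks for b in block_ids])
    return ~is_test, is_test

# toy example: 1000 points in a 100^3 volume, split by 25-voxel blocks
coords = np.random.default_rng(1).integers(0, 100, size=(1000, 3))
train_mask, test_mask = spatial_block_split(coords, block_size=25)
```

The key property is that no spatial block contributes samples to both sets; a random per-sample split would put immediately adjacent (and highly correlated) voxels on opposite sides and inflate evaluation scores.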
End-to-end process from acquisition through interpretation.
Quality criteria and practical checks for robust outputs.
Structured support for technical troubleshooting from Dr. Jeff Lichtman.
Proofreading and analysis-heavy units for applied practice.
Research and teaching frameworks to operationalize practice.
Filter concepts by immediate need to surface practical research resources quickly.
Track: research-in-action
User needs: prioritizing corrections, reporting quality rigorously
Classify error modes, apply correction workflows, and tie decisions to quantitative quality metrics.
How to learn it: Triage corrections by scientific impact and report QC metrics that directly drive release decisions.
Teaching set:
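The impact-first triage described in this unit can be sketched as a simple ranking. The fields and the impact proxy (synapses recovered per minute of proofreading effort) are illustrative assumptions, not the actual schema of any tool:

```python
from dataclasses import dataclass

@dataclass
class SegError:
    error_id: str       # hypothetical identifier for a candidate correction
    kind: str           # "merge", "split", or "boundary"
    n_synapses: int     # synapses attached to the affected segments
    fix_minutes: float  # estimated proofreader effort

def triage(errors):
    """Rank candidate corrections by scientific impact per unit effort:
    here, synapses recovered per minute of proofreading (an assumed proxy)."""
    return sorted(errors, key=lambda e: e.n_synapses / e.fix_minutes, reverse=True)

queue = triage([
    SegError("e1", "merge", n_synapses=120, fix_minutes=10),
    SegError("e2", "split", n_synapses=5, fix_minutes=2),
    SegError("e3", "boundary", n_synapses=40, fix_minutes=1),
])
# queue[0] is the cheapest high-impact fix
```

A quick boundary fix touching many synapses outranks a slow merge repair; logging the ranking alongside the chosen impact proxy is what makes the resulting QC report reproducible.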
Track: research-in-action
User needs: designing graph analyses, choosing null models
Build query-driven motif workflows with statistical controls and reproducible execution.
How to learn it: Define graph hypotheses, run null-model comparisons, and report supported versus unsupported claims clearly.
Teaching set:
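The null-model comparison above can be sketched with a density-matched random null. This is a minimal illustration only: it counts reciprocal pairs (the simplest directed motif) against a uniform-random null with the same node and edge counts, whereas real analyses often prefer degree-preserving nulls:

```python
import random

def count_reciprocal(edges):
    """Count reciprocally connected node pairs (a->b and b->a)."""
    es = set(edges)
    return sum(1 for (a, b) in es if a < b and (b, a) in es)

def er_null(n_nodes, n_edges, rng):
    """Density-matched null: same node and edge counts,
    uniformly random directed edges, no self-loops."""
    edges = set()
    while len(edges) < n_edges:
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if a != b:
            edges.add((a, b))
    return edges

def motif_vs_null(edges, n_nodes, n_null=200, seed=0):
    """Observed motif count plus the null ensemble's mean and std. dev."""
    rng = random.Random(seed)
    obs = count_reciprocal(edges)
    counts = [count_reciprocal(er_null(n_nodes, len(edges), rng)) for _ in range(n_null)]
    mean = sum(counts) / n_null
    sd = (sum((c - mean) ** 2 for c in counts) / n_null) ** 0.5
    return obs, mean, sd

# toy circuit with three reciprocal pairs among six nodes
edges = [(0, 1), (1, 0), (2, 3), (3, 2), (4, 5), (5, 4), (0, 2), (1, 3)]
obs, null_mean, null_sd = motif_vs_null(edges, n_nodes=6)
```

If the observed count sits well outside the null distribution, the motif enrichment is a supportable claim; if it sits inside, reporting it as "not distinguishable from the null" is the honest outcome.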
Track: research-in-action
User needs: finding reliable resources, maintaining citation hygiene
Curate methods, datasets, and tools with metadata completeness and explicit limitations.
How to learn it: Use consistent metadata and quality checks so references are reusable, comparable, and trustworthy.
Teaching set:
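The metadata-completeness check described above can be sketched as a required-field audit. The field names and the example record are illustrative assumptions, not a mandated schema:

```python
# Illustrative required fields for a curated resource record
REQUIRED_FIELDS = ("title", "source_url", "dataset_version", "license", "known_limitations")

def completeness(record):
    """Return the fraction of required fields that are present and
    non-empty, plus the list of missing or empty fields."""
    filled = [f for f in REQUIRED_FIELDS if record.get(f)]
    missing = [f for f in REQUIRED_FIELDS if f not in filled]
    return len(filled) / len(REQUIRED_FIELDS), missing

# hypothetical record with one empty field
score, missing = completeness({
    "title": "MouseConnects synapse table",
    "source_url": "https://example.org/mouseconnects",  # placeholder URL
    "dataset_version": "2.0",
    "license": "",
    "known_limitations": "proofread in cortex only",
})
```

Scoring every record the same way makes gaps (here, a missing license) visible at curation time rather than at publication time, and the `known_limitations` field keeps stated limitations attached to the resource itself.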