HED test suite user guide¶
Introduction¶
What is HED?¶
HED (Hierarchical Event Descriptors) is a framework for systematically describing events and experimental metadata in machine-actionable form. HED provides:
Controlled vocabulary for annotating experimental data and events
Standardized infrastructure enabling automated analysis and interpretation
Integration with major neuroimaging standards (BIDS and NWB)
For more information, visit the HED project homepage and the resources page.
What is the HED test suite?¶
The HED test suite (hed-tests repository) is the official collection of JSON test cases for validating HED validator implementations. It provides:
Comprehensive test coverage: 136 test cases covering 33 error codes
Multiple test types: String, sidecar, event, and combo tests
AI-friendly metadata: Explanations, common causes, and correction strategies
Cross-platform consistency: Single source of truth for all validators
Machine-readable specification: Tests document expected validation behavior
Purpose¶
The test suite serves three primary purposes:
Validator validation: Ensure Python, JavaScript, and future implementations produce consistent results
Specification documentation: Provide executable examples of HED validation rules
AI training: Enable AI systems to understand HED validation through structured examples
Getting started¶
Clone the repository¶
Get the test suite from GitHub:
git clone https://github.com/hed-standard/hed-tests.git
cd hed-tests
Repository structure¶
hed-tests/
├── json_test_data/ # All test data
│ ├── validation_test_data/ # 25 validation error test files
│ ├── schema_test_data/ # 17 schema error test files
│ ├── validation_tests.json # Consolidated validation tests
│ ├── validation_code_dict.json # Maps error codes to test names
│ ├── validation_testname_dict.json # Maps test names to error codes
│ ├── schema_tests.json # Consolidated schema tests
│ ├── schema_code_dict.json # Maps error codes to test names
│ └── schema_testname_dict.json # Maps test names to error codes
├── src/
│ ├── scripts/ # Utility scripts
│ └── schemas/ # JSON schema for test validation
├── docs/ # Documentation (this site)
└── tests/ # Test utilities
Test files are organized by error code in the json_test_data directory. Tests for validating HED annotations are in the validation_test_data subdirectory, while tests relevant only to HED schema development are in the schema_test_data subdirectory.
Test structure¶
Tests for a specific error code are collected in a single file named after the HED error code most likely to be produced. Each file must conform to the JSON schema in src/schemas/test_schema.json.
A validator might give a different error code
Because the error code a validator assigns depends heavily on the order in which it checks for different kinds of errors, a given test may produce a different code in different implementations.
Each test may therefore include an alt_codes key listing acceptable alternative error codes.
Validating the tests¶
Ensure test files conform to the JSON schema:
# Validate a single test file
python src/scripts/validate_test_structure.py json_test_data/validation_test_data/TAG_INVALID.json
# Validate all tests
python src/scripts/validate_test_structure.py json_test_data/validation_test_data
python src/scripts/validate_test_structure.py json_test_data/schema_test_data
Consolidate tests¶
Generate consolidated test files and lookup dictionaries:
python src/scripts/consolidate_tests.py
# Creates:
# - validation_tests.json (all validation tests)
# - validation_code_dict.json (error codes to test names)
# - validation_testname_dict.json (test names to error codes)
# - schema_tests.json (all schema tests)
# - schema_code_dict.json (error codes to test names)
# - schema_testname_dict.json (test names to error codes)
The consolidation process creates both combined test files and lookup dictionaries for efficient test discovery.
Check test coverage¶
Analyze test coverage statistics:
python src/scripts/check_coverage.py
# Output:
# HED Test Suite Coverage Report
# =====================================
# Total test files: 42
# Total test cases: 136
# Error codes covered: 33
# ...
Generate test index¶
Create a searchable test index:
python src/scripts/generate_test_index.py
# Creates: docs/test_index.md
Test format specification¶
Test format overview¶
Each JSON test file in the HED Test Suite follows a standardized structure to ensure consistent validation testing across all HED validator implementations.
File structure¶
Test files are located in:
json_test_data/validation_test_data/ - Tests for validation error codes
json_test_data/schema_test_data/ - Tests for schema validation errors
Each file contains an array of test case objects.
Test case schema¶
[
{
"error_code": "TAG_INVALID",
"alt_codes": ["PLACEHOLDER_INVALID"],
"name": "tag-invalid-in-schema",
"description": "Human-readable description of what is being tested",
"warning": false,
"schema": "8.4.0",
"error_category": "semantic",
"common_causes": ["List of common causes"],
"explanation": "Detailed explanation for AI/developers",
"correction_strategy": "How to fix the issue",
"correction_examples": [
{
"wrong": "Invalid HED string",
"correct": "Corrected HED string",
"explanation": "Why the correction works"
}
],
"definitions": [
"(Definition/Acc/#, (Acceleration/# m-per-s^2, Red))"
],
"tests": {
"string_tests": {...},
"sidecar_tests": {...},
"event_tests": {...},
"combo_tests": {...}
}
}
]
Required fields¶
error_code¶
Type: string
The HED error code being tested. Must match the filename (e.g., TAG_INVALID.json).
Example: "TAG_INVALID"
name¶
Type: string
A unique, descriptive identifier for the test case. Use lowercase with hyphens.
Example: "tag-invalid-in-schema"
description¶
Type: string
Human-readable description of what the test case validates.
Example: "Test that tags not in schema are detected as invalid"
schema¶
Type: string
HED schema version for this test case.
Example: "8.4.0"
tests¶
Type: object
Container for all test data. Must include at least one test type.
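The required fields above can be sanity-checked before a test case is run. The following is a minimal illustrative sketch, not part of the official tooling; the function name and the emptiness check are our own additions (the authoritative check is the JSON schema in src/schemas/test_schema.json):

```python
# Illustrative check for the required fields described above.
# The "tests" object must also contain at least one test type.
REQUIRED_FIELDS = ("error_code", "name", "description", "schema", "tests")

def missing_required_fields(test_case):
    """Return the required fields absent from a test case dict."""
    missing = [field for field in REQUIRED_FIELDS if field not in test_case]
    if "tests" in test_case and not test_case["tests"]:
        missing.append("tests (empty)")
    return missing

case = {
    "error_code": "TAG_INVALID",
    "name": "tag-invalid-basic",
    "description": "demo",
    "schema": "8.4.0",
    "tests": {"string_tests": {"fails": ["Invalidtag"], "passes": ["Red"]}},
}
print(missing_required_fields(case))  # []
```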
Optional fields¶
alt_codes¶
Type: array[string]
Alternative error codes that might be reported for this condition. Useful when multiple validators use different codes for the same error.
Example: ["PLACEHOLDER_INVALID"]
warning¶
Type: boolean (default: false)
Whether this test should produce a warning instead of an error.
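A test runner can use this field to decide which severity to expect; a minimal sketch (the helper name is illustrative):

```python
# Illustrative helper: a test case expects a warning only if the
# optional "warning" field is present and true (default: false).
def expected_severity(test_case):
    return "warning" if test_case.get("warning", False) else "error"

print(expected_severity({"warning": True}))  # warning
print(expected_severity({}))                 # error
```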
error_category¶
Type: string
Semantic category of the error. One of:
"syntax" - Basic syntax errors (parentheses, commas, etc.)
"semantic" - Tag meaning errors (invalid tags, wrong values)
"value" - Value-specific errors (units, placeholders)
"consistency" - Internal consistency errors (definition usage)
"uniqueness" - Duplicate detection errors
"schema" - Schema structure errors
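A test runner might use this field to group or order test cases; a small sketch with made-up sample cases:

```python
# Sketch: group test cases by the optional error_category field so a
# runner can, for example, execute all syntax tests first.
# The sample cases below are illustrative only.
from collections import defaultdict

cases = [
    {"name": "parentheses-mismatch-basic", "error_category": "syntax"},
    {"name": "tag-invalid-basic", "error_category": "semantic"},
    {"name": "units-invalid-basic", "error_category": "value"},
]

by_category = defaultdict(list)
for case in cases:
    by_category[case.get("error_category", "uncategorized")].append(case["name"])

print(dict(by_category))
```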
common_causes¶
Type: array[string]
List of common reasons this error occurs. Used by AI systems to understand typical mistakes.
Example:
[
"Typo in tag name",
"Using deprecated tag",
"Tag from wrong schema version"
]
explanation¶
Type: string
Detailed explanation of the error for AI systems and developers.
Example: "Tags must exist in the active HED schema. Extensions are allowed but the base tag must be valid."
correction_strategy¶
Type: string
General approach to fixing this error.
Example: "Check the tag against the schema browser at hedtags.org. Use the correct tag path or a valid extension."
correction_examples¶
Type: array[object]
Concrete examples showing wrong → correct transformations.
Structure:
[
{
"wrong": "Invalidtag",
"correct": "Event",
"explanation": "Use a tag that exists in the schema"
}
]
definitions¶
Type: array[string]
HED definition strings required for the test case. These are evaluated before the test strings.
Example:
[
"(Definition/Acc/#, (Acceleration/# m-per-s^2, Red))"
]
Test types¶
string_tests¶
Tests for raw HED strings.
Structure:
{
"fails": [
"Red, Invalidtag",
"Blue, Typo/Tag"
],
"passes": [
"Red, Blue",
"Event, Sensory-event"
]
}
fails: Array of HED strings that should produce the error
passes: Array of HED strings that should NOT produce the error
sidecar_tests¶
Tests for BIDS JSON sidecar files.
Structure:
{
"fails": [
{
"sidecar": {
"event_type": {
"HED": {
"stimulus": "Invalidtag"
}
}
}
}
],
"passes": [
{
"sidecar": {
"event_type": {
"HED": {
"stimulus": "Sensory-event"
}
}
}
}
]
}
Each item is an object with a sidecar property containing a BIDS sidecar JSON structure.
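The HED strings embedded in a sidecar can be collected generically before validation. This is a minimal sketch, assuming the standard BIDS convention that a categorical column maps level names to HED strings while a value column holds a single HED string; the helper name is our own:

```python
# Sketch: collect the HED strings from a sidecar object.
# Categorical columns map level names to HED strings (a dict);
# value columns hold a single HED string (a str).
def extract_hed_strings(sidecar):
    hed_strings = []
    for column_meta in sidecar.values():
        hed = column_meta.get("HED")
        if isinstance(hed, dict):    # categorical column
            hed_strings.extend(hed.values())
        elif isinstance(hed, str):   # value column
            hed_strings.append(hed)
    return hed_strings

sidecar = {"event_type": {"HED": {"stimulus": "Sensory-event"}}}
print(extract_hed_strings(sidecar))  # ['Sensory-event']
```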
event_tests¶
Tests for tabular event data with HED annotations.
Structure:
{
"fails": [
[
["onset", "duration", "HED"],
[4.5, 0, "Red, Invalidtag"]
]
],
"passes": [
[
["onset", "duration", "HED"],
[4.5, 0, "Red, Blue"]
]
]
}
Each test is a 2D array:
First row: Column headers (must include at least one HED column)
Subsequent rows: Event data
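Splitting the 2D array into headers and data rows is straightforward; a minimal sketch using the structure above:

```python
# Sketch: split an event test's 2D array into headers and rows,
# then pull the annotation out of the HED column.
event_data = [
    ["onset", "duration", "HED"],
    [4.5, 0, "Red, Blue"],
]
headers, rows = event_data[0], event_data[1:]
hed_index = headers.index("HED")
hed_strings = [row[hed_index] for row in rows]
print(hed_strings)  # ['Red, Blue']
```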
combo_tests¶
Combined sidecar + event tests (realistic BIDS scenarios).
Structure:
{
"fails": [
{
"sidecar": {
"event_type": {
"HED": {
"show": "Sensory-event"
}
}
},
"events": [
["onset", "duration", "event_type", "HED"],
[4.5, 0, "show", "Invalidtag"]
]
}
],
"passes": [...]
}
Combines a sidecar definition with event data that uses categorical values from the sidecar.
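The following sketch shows, in simplified form, how the two pieces fit together: the categorical value in the event row ("show") is looked up in the sidecar, and the result is combined with the row's own HED column. A real BIDS-aware validator performs a more complete assembly; this is only an illustration of the data flow:

```python
# Sketch: assemble the effective HED annotation for one event row of a
# combo test by resolving categorical values through the sidecar.
combo = {
    "sidecar": {"event_type": {"HED": {"show": "Sensory-event"}}},
    "events": [
        ["onset", "duration", "event_type", "HED"],
        [4.5, 0, "show", "Red"],
    ],
}
headers = combo["events"][0]
row = combo["events"][1]

parts = []
for column, value in zip(headers, row):
    hed = combo["sidecar"].get(column, {}).get("HED")
    if isinstance(hed, dict) and str(value) in hed:
        parts.append(hed[str(value)])   # categorical lookup in the sidecar
    elif column == "HED" and value:
        parts.append(value)             # direct annotation in the events file

assembled = ", ".join(parts)
print(assembled)  # Sensory-event, Red
```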
Validation rules¶
Required structure¶
At least one test: Every test case must have at least one test type with data
Both fails and passes: Each test type should include both failing and passing examples
Valid JSON: All test data must be valid JSON
Consistent error_code: Must match the filename
Naming conventions¶
File names: ERROR_CODE.json (uppercase, underscores)
Test names: error-code-specific-scenario (lowercase, hyphens)
Error codes: Match the official HED specification
AI metadata¶
For AI training and code generation, include:
explanation: Why this error occurs
common_causes: Typical mistakes
correction_strategy: How to fix the error
correction_examples: Concrete before/after examples
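These fields can be flattened into a compact text summary, for example when building prompts or documentation from the test suite. The field names follow the test schema; the summary format and helper name below are our own choices:

```python
# Sketch: turn a test case's AI metadata into a short text summary.
def summarize_ai_metadata(case):
    lines = [f"Error: {case['error_code']}"]
    if "explanation" in case:
        lines.append(f"Why: {case['explanation']}")
    for cause in case.get("common_causes", []):
        lines.append(f"Common cause: {cause}")
    if "correction_strategy" in case:
        lines.append(f"Fix: {case['correction_strategy']}")
    return "\n".join(lines)

case = {
    "error_code": "TAG_INVALID",
    "explanation": "Tags must exist in the active HED schema.",
    "common_causes": ["Typo in tag name"],
    "correction_strategy": "Check the tag against the schema browser.",
}
print(summarize_ai_metadata(case))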
Example test file¶
Here’s a complete example from TAG_INVALID.json:
[
{
"error_code": "TAG_INVALID",
"alt_codes": [],
"name": "tag-invalid-basic",
"description": "Basic test for tags not in the schema",
"warning": false,
"schema": "8.4.0",
"error_category": "semantic",
"common_causes": [
"Typo in tag name",
"Using a tag from a different schema version",
"Attempting to use custom tags without proper extension syntax"
],
"explanation": "Tags must exist in the active HED schema. Each tag path must be found in the schema vocabulary, though extensions to valid tags are allowed using the extension syntax.",
"correction_strategy": "Verify the tag exists in the schema using the schema browser at hedtags.org. Check for typos, ensure you're using the correct schema version, or use proper extension syntax for custom additions.",
"correction_examples": [
{
"wrong": "Invalidtag",
"correct": "Event",
"explanation": "Use a tag that exists in the schema"
},
{
"wrong": "Red, Sensory/Invalidtag",
"correct": "Red, Sensory-event",
"explanation": "The full tag path must be valid"
}
],
"definitions": [],
"tests": {
"string_tests": {
"fails": [
"Invalidtag",
"Red, Invalidtag",
"Sensory/Invalidtag"
],
"passes": [
"Red",
"Event",
"Sensory-event"
]
}
}
}
]
Lookup dictionaries¶
In addition to the individual test files, consolidated lookup dictionaries enable efficient test discovery.
validation_code_dict.json — maps error codes to test case names:
{
"TAG_INVALID": [
"tag-invalid-in-schema",
"tag-extension-invalid-duplicate"
],
"UNITS_INVALID": [
"units-invalid-for-unit-class",
"units-invalid-si-units"
]
}
validation_testname_dict.json — maps test case names to all error codes they validate:
{
"tag-invalid-in-schema": ["TAG_INVALID", "PLACEHOLDER_INVALID"],
"character-invalid-non-printing": ["CHARACTER_INVALID", "TAG_INVALID"]
}
schema_code_dict.json and schema_testname_dict.json provide equivalent lookups for schema tests.
Usage:
import json

with open('json_test_data/validation_code_dict.json') as f:
    code_dict = json.load(f)
tests_for_tag_invalid = code_dict['TAG_INVALID']

with open('json_test_data/validation_testname_dict.json') as f:
    name_dict = json.load(f)
codes = name_dict['tag-invalid-in-schema']
Dictionaries are automatically regenerated by src/scripts/consolidate_tests.py.
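The testname dictionary is essentially the inverse of the code dictionary. Rebuilding one from the other illustrates the relationship; the sample data below is abbreviated from the examples above:

```python
# Sketch: invert a code-to-names dictionary into a name-to-codes
# dictionary, mirroring how the two lookup files relate.
code_dict = {
    "TAG_INVALID": ["tag-invalid-in-schema"],
    "PLACEHOLDER_INVALID": ["tag-invalid-in-schema"],
}

name_dict = {}
for code, names in code_dict.items():
    for name in names:
        name_dict.setdefault(name, []).append(code)

print(name_dict)  # {'tag-invalid-in-schema': ['TAG_INVALID', 'PLACEHOLDER_INVALID']}
```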
Validator integration guide¶
Integration overview¶
The HED Test Suite provides standardized JSON test cases that all HED validators should pass. By integrating these tests, you ensure your validator:
Matches the specification: Validates HED according to the official rules
Maintains consistency: Produces the same results as other validators
Prevents regressions: Catches changes in validation behavior
Documents behavior: Tests serve as executable specifications
Getting the tests¶
Method 1: Git clone (recommended)¶
Clone the repository to access all tests:
git clone https://github.com/hed-standard/hed-tests.git
cd hed-tests
Update periodically to get new tests:
git pull origin main
Method 2: Download ZIP¶
Download the latest tests as a ZIP file:
https://github.com/hed-standard/hed-tests/archive/refs/heads/main.zip
Method 3: Submodule¶
Add as a git submodule to your validator repository:
git submodule add https://github.com/hed-standard/hed-tests.git tests/hed-tests
git submodule update --init --recursive
Integration approaches¶
Approach 1: Direct test execution¶
Read test files and execute them directly in your test framework.
Python example (unittest):
import json
import unittest
from pathlib import Path

# load_schema and validate_hed_string are placeholders for your
# validator's own API.
class TestHedValidation(unittest.TestCase):
    """Test HED validation using the test suite."""

    @classmethod
    def setUpClass(cls):
        """Load all test cases once before running tests."""
        cls.test_cases = []
        test_dir = Path("hed-tests/json_test_data/validation_test_data")
        for test_file in test_dir.glob("*.json"):
            with open(test_file) as f:
                cases = json.load(f)
            for case in cases:
                cls.test_cases.append((test_file.stem, case))

    def test_validation_suite(self):
        """Run each test case from the suite."""
        for error_code, test_case in self.test_cases:
            with self.subTest(error_code=error_code, test_name=test_case["name"]):
                schema = load_schema(test_case["schema"])
                string_tests = test_case.get("tests", {}).get("string_tests", {})

                # Strings that should produce the error
                for hed_string in string_tests.get("fails", []):
                    issues = validate_hed_string(hed_string, schema)
                    self.assertTrue(
                        any(issue.code == error_code for issue in issues),
                        f"Expected {error_code} for: {hed_string}"
                    )

                # Strings that should NOT produce the error
                for hed_string in string_tests.get("passes", []):
                    issues = validate_hed_string(hed_string, schema)
                    self.assertFalse(
                        any(issue.code == error_code for issue in issues),
                        f"Unexpected {error_code} for: {hed_string}"
                    )

if __name__ == '__main__':
    unittest.main()
JavaScript example (Jest):
const fs = require('fs');
const path = require('path');
const { validateHedString } = require('./validator');

describe('HED Validation Tests', () => {
  const testDir = 'hed-tests/json_test_data/validation_test_data';
  const files = fs.readdirSync(testDir);

  files.forEach(filename => {
    const errorCode = path.basename(filename, '.json');
    const testCases = JSON.parse(
      fs.readFileSync(path.join(testDir, filename), 'utf8')
    );

    describe(errorCode, () => {
      testCases.forEach(testCase => {
        test(testCase.name, () => {
          const schema = loadSchema(testCase.schema);

          // Strings that should produce the error
          const fails = testCase.tests?.string_tests?.fails || [];
          fails.forEach(hedString => {
            const issues = validateHedString(hedString, schema);
            expect(issues.some(i => i.code === errorCode)).toBe(true);
          });

          // Strings that should NOT produce the error
          const passes = testCase.tests?.string_tests?.passes || [];
          passes.forEach(hedString => {
            const issues = validateHedString(hedString, schema);
            expect(issues.some(i => i.code === errorCode)).toBe(false);
          });
        });
      });
    });
  });
});
Approach 2: Generate test cases¶
Generate test files in your native test format from the JSON.
Example: Convert JSON to Python unittest files:
import json
from pathlib import Path

def generate_test_file(json_path, output_path):
    """Generate a Python test file from JSON test cases."""
    with open(json_path) as f:
        test_cases = json.load(f)

    error_code = json_path.stem
    test_code = (
        "import unittest\n"
        "from hed_validator import validate_hed_string, load_schema\n\n"
        f"class Test{error_code}(unittest.TestCase):\n"
    )

    for case in test_cases:
        test_name = case["name"].replace("-", "_")
        test_code += (
            f"    def test_{test_name}(self):\n"
            f"        \"\"\"Test: {case['description']}\"\"\"\n"
            f"        schema = load_schema(\"{case['schema']}\")\n"
        )
        string_tests = case.get("tests", {}).get("string_tests", {})
        for hed_string in string_tests.get("fails", []):
            test_code += (
                f"        issues = validate_hed_string(\"{hed_string}\", schema)\n"
                f"        self.assertTrue(any(i.code == \"{error_code}\" for i in issues))\n"
            )
        for hed_string in string_tests.get("passes", []):
            test_code += (
                f"        issues = validate_hed_string(\"{hed_string}\", schema)\n"
                f"        self.assertFalse(any(i.code == \"{error_code}\" for i in issues))\n"
            )

    with open(output_path, 'w') as f:
        f.write(test_code)
Approach 3: Test report comparison¶
Run tests and compare your results against a reference implementation.
def compare_validation_results(test_case, reference_issues, your_issues):
    """Compare validation results against a reference implementation."""
    error_code = test_case["error_code"]

    # Did each implementation report the expected error?
    ref_found = any(i.code == error_code for i in reference_issues)
    your_found = any(i.code == error_code for i in your_issues)

    if ref_found != your_found:
        return {
            "test": test_case["name"],
            "expected": ref_found,
            "actual": your_found,
            "status": "MISMATCH"
        }
    return {"status": "MATCH"}
Test type implementation¶
String tests¶
The simplest test type: raw HED strings validated directly against the schema.
def run_string_tests(test_case, schema):
    """Execute string_tests from a test case."""
    error_code = test_case["error_code"]
    string_tests = test_case["tests"].get("string_tests", {})

    # Strings that should fail
    for hed_string in string_tests.get("fails", []):
        issues = validate_hed_string(hed_string, schema)
        assert any(i.code == error_code for i in issues), \
            f"Expected {error_code} for: {hed_string}"

    # Strings that should pass
    for hed_string in string_tests.get("passes", []):
        issues = validate_hed_string(hed_string, schema)
        assert not any(i.code == error_code for i in issues), \
            f"Unexpected {error_code} for: {hed_string}"
Sidecar tests¶
Test BIDS JSON sidecar validation.
def run_sidecar_tests(test_case, schema):
    """Execute sidecar_tests from a test case."""
    error_code = test_case["error_code"]
    sidecar_tests = test_case["tests"].get("sidecar_tests", {})

    for sidecar_obj in sidecar_tests.get("fails", []):
        sidecar = sidecar_obj["sidecar"]
        issues = validate_sidecar(sidecar, schema)
        assert any(i.code == error_code for i in issues)

    for sidecar_obj in sidecar_tests.get("passes", []):
        sidecar = sidecar_obj["sidecar"]
        issues = validate_sidecar(sidecar, schema)
        assert not any(i.code == error_code for i in issues)
Event tests¶
Test tabular event data.
def run_event_tests(test_case, schema):
    """Execute event_tests from a test case."""
    error_code = test_case["error_code"]
    event_tests = test_case["tests"].get("event_tests", {})

    for event_data in event_tests.get("fails", []):
        headers = event_data[0]
        rows = event_data[1:]
        issues = validate_events(headers, rows, schema)
        assert any(i.code == error_code for i in issues)

    for event_data in event_tests.get("passes", []):
        headers = event_data[0]
        rows = event_data[1:]
        issues = validate_events(headers, rows, schema)
        assert not any(i.code == error_code for i in issues)
Combo tests¶
Combined sidecar + event tests (most realistic).
def run_combo_tests(test_case, schema):
    """Execute combo_tests from a test case."""
    error_code = test_case["error_code"]
    combo_tests = test_case["tests"].get("combo_tests", {})

    for combo in combo_tests.get("fails", []):
        sidecar = combo["sidecar"]
        headers = combo["events"][0]
        rows = combo["events"][1:]
        issues = validate_bids_dataset(sidecar, headers, rows, schema)
        assert any(i.code == error_code for i in issues)

    for combo in combo_tests.get("passes", []):
        sidecar = combo["sidecar"]
        headers = combo["events"][0]
        rows = combo["events"][1:]
        issues = validate_bids_dataset(sidecar, headers, rows, schema)
        assert not any(i.code == error_code for i in issues)
Handling definitions¶
Some tests require definitions to be loaded before validation:
def run_test_with_definitions(test_case, schema):
    """Run a test case with definitions pre-loaded."""
    # Load definitions first
    definition_dict = {}
    for def_string in test_case.get("definitions", []):
        name, definition = parse_definition(def_string)
        definition_dict[name] = definition

    # Now run the tests with definitions available
    for hed_string in test_case["tests"]["string_tests"]["fails"]:
        issues = validate_hed_string(
            hed_string,
            schema,
            definitions=definition_dict
        )
        # ... assertions
Error code mapping¶
Your validator might use different error codes. Use the alt_codes field:
def check_error_match(issue, expected_code, alt_codes):
    """Check whether an issue matches the expected code or an alternate."""
    if issue.code == expected_code:
        return True
    return issue.code in alt_codes
Example from test case:
{
"error_code": "TAG_INVALID",
"alt_codes": ["PLACEHOLDER_INVALID"],
...
}
CI/CD integration¶
Add test suite validation to your CI pipeline:
GitHub Actions example:
name: HED Test Suite
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Clone HED test suite
        run: |
          git clone https://github.com/hed-standard/hed-tests.git
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          pip install -e .
      - name: Run HED test suite
        run: |
          python -m unittest tests.test_hed_validation -v
Example integrations¶
hed-python¶
# tests/test_validation_suite.py
import json
import unittest
from pathlib import Path

class TestValidationSuite(unittest.TestCase):
    def test_validation_suite(self):
        test_dir = Path("hed-tests/json_test_data/validation_test_data")
        for test_file in test_dir.glob("*.json"):
            with self.subTest(test_file=test_file.name):
                with open(test_file) as f:
                    test_cases = json.load(f)
                # ... run tests
hed-javascript¶
// tests/validation.test.js
const testData = require('./hed-tests/json_test_data/validation_tests.json');

describe('HED Validation Suite', () => {
  testData.forEach(testCase => {
    // ... run tests
  });
});
Using lookup dictionaries¶
import json

with open('hed-tests/json_test_data/validation_code_dict.json') as f:
    code_dict = json.load(f)
tag_tests = code_dict.get('TAG_INVALID', [])
print(f"TAG_INVALID is validated by {len(tag_tests)} tests")

with open('hed-tests/json_test_data/validation_tests.json') as f:
    all_tests = json.load(f)
filtered_tests = [t for t in all_tests if t['name'] in tag_tests]
Reporting issues¶
If your validator produces different results:
Verify the test case: Ensure you’re parsing the JSON correctly
Check schema version: Make sure you’re using the correct schema
Review the specification: Check the HED specification for clarification
File an issue: Report discrepancies at https://github.com/hed-standard/hed-tests/issues
Include:
Test case name and error code
Expected vs actual behavior
Your validator implementation (Python, JavaScript, etc.)
Schema version used
Integration best practices¶
Run all tests: Don’t cherry-pick - run the entire suite
Automate execution: Integrate tests into CI/CD
Track coverage: Monitor which tests pass/fail over time
Update regularly: Pull latest tests periodically
Report discrepancies: Help improve the test suite
Use schema versions: Respect the schema version in each test
Handle all test types: Support string, sidecar, event, and combo tests
Error code categories¶
Tests are organized by error code, mapping to validation rules in the HED specification.
Syntax errors¶
CHARACTER_INVALID - Invalid characters in tags
COMMA_MISSING - Missing required commas
PARENTHESES_MISMATCH - Unmatched parentheses
TAG_EMPTY - Empty tag elements
Semantic errors¶
TAG_INVALID - Tags not in schema
TAG_EXTENDED - Tag extension warnings (warning)
TAG_EXTENSION_INVALID - Invalid tag extensions
VALUE_INVALID - Invalid tag values
UNITS_INVALID - Invalid or missing units
Definition errors¶
DEFINITION_INVALID - Malformed definitions
DEF_INVALID - Invalid definition usage
DEF_EXPAND_INVALID - Definition expansion errors
Sidecar errors¶
SIDECAR_INVALID - Invalid sidecar structure
SIDECAR_BRACES_INVALID - Curly brace errors
SIDECAR_KEY_MISSING - Missing required keys
Schema errors¶
SCHEMA_ATTRIBUTE_INVALID - Invalid schema attributes
SCHEMA_ATTRIBUTE_VALUE_INVALID - Invalid schema attribute values
SCHEMA_CHARACTER_INVALID - Invalid characters in schema
SCHEMA_DEPRECATION_ERROR - Deprecation errors
SCHEMA_DUPLICATE_NODE - Duplicate schema nodes
SCHEMA_HEADER_INVALID - Invalid schema headers
SCHEMA_LIBRARY_INVALID - Invalid library references
SCHEMA_LOAD_FAILED - Schema loading failures
SCHEMA_SECTION_MISSING - Missing required schema sections
WIKI_DELIMITERS_INVALID - Invalid wiki delimiters in schema
Temporal errors¶
TEMPORAL_TAG_ERROR- Temporal tag issues (Onset/Offset/Inset)
Other¶
ELEMENT_DEPRECATED - Deprecated element usage (warning)
PLACEHOLDER_INVALID - Invalid placeholder usage
TAG_EXPRESSION_REPEATED - Repeated tag expressions
TAG_GROUP_ERROR - Tag group structure errors
TAG_NAMESPACE_PREFIX_INVALID - Invalid namespace prefix
TAG_NOT_UNIQUE - Non-unique tag usage
TAG_REQUIRES_CHILD - Tag requires a child node
Test index¶
The complete, searchable test index with all 136 test cases is in test_index.md.
Support and contributing¶
HED resources¶
HED homepage: Project overview
HED specification: Formal validation rules
HED schemas: Vocabulary definitions
HED Python validator: Python implementation
HED JavaScript validator: JavaScript implementation
Getting help¶
Issues: GitHub Issues
Discussions: GitHub Discussions
Email: hed.maintainers@gmail.com
Contributing¶
See CONTRIBUTING.md for guidelines on adding new tests or improving existing ones.
End of User Guide