Maintenance Task Prioritization Framework

Status: Active
Version: 1.0.0
Last Updated: 2026-01-03
Epic: Epic 7 - Codebase Maintenance and Review
Story: Story 1 - Codebase Maintenance Tasks
Task: E7:S01:T02 - Create maintenance task prioritization framework
Related: Kanban Governance Policy, Framework Health Metrics, IDE-Flagged Issues


Executive Summary

This document defines the prioritization framework for maintenance tasks in the AI Dev Kit project. It provides systematic criteria and a repeatable process for prioritizing maintenance work, so that critical issues are addressed promptly and codebase health is maintained efficiently.

Key Principles:

  • Multi-Dimensional Prioritization: Tasks are prioritized based on severity, impact, and effort
  • Context-Aware: Prioritization considers project context, dependencies, and strategic goals
  • Actionable: Framework provides clear, actionable guidance for task prioritization
  • Kanban Integration: Seamlessly integrates with Kanban framework for task tracking
  • Continuous Improvement: Framework evolves based on maintenance patterns and outcomes

Maintenance Task Categories

Category 1: IDE-Flagged Issues

Source: IDE/editor warnings, errors, and informational messages

Subcategories:

  • Errors: Critical issues that prevent code execution or compilation
  • Warnings: Issues that may cause problems but don't prevent execution
  • Info: Informational messages suggesting improvements or best practices

Characteristics:

  • High volume, continuous stream
  • Varying severity and impact
  • Can accumulate if not addressed regularly
  • Often quick to fix individually

Category 2: Code Quality Issues

Source: Code analysis tools, linting, static analysis

Subcategories:

  • Code Smells: Patterns indicating potential problems
  • Complexity Issues: High cyclomatic complexity, deep nesting
  • Style Violations: Inconsistent formatting, naming conventions
  • Security Vulnerabilities: Security-related code issues

Characteristics:

  • Systematic patterns across codebase
  • May require refactoring effort
  • Impact on maintainability and readability
  • Often benefits from batch processing

Category 3: Technical Debt

Source: Accumulated shortcuts, workarounds, deferred improvements

Subcategories:

  • Architectural Debt: Structural issues requiring refactoring
  • Documentation Debt: Missing or outdated documentation
  • Test Debt: Missing tests, low coverage
  • Dependency Debt: Outdated dependencies, security issues

Characteristics:

  • Long-term impact on maintainability
  • May require significant effort to address
  • Strategic importance varies
  • Benefits from planned improvement cycles

Category 4: Framework and Infrastructure Maintenance

Source: Framework updates, infrastructure improvements, tooling

Subcategories:

  • Framework Updates: Updating dependencies, frameworks
  • Infrastructure Improvements: CI/CD, tooling, automation
  • Performance Optimization: Targeted performance improvements and tuning
  • Compatibility Maintenance: Keeping compatibility with dependencies

Characteristics:

  • Strategic importance
  • May require coordination across projects
  • Benefits from planned maintenance windows
  • Often has dependencies on other work

Prioritization Dimensions

Dimension 1: Severity

Definition: The criticality of the issue and its potential to cause problems

Severity Levels:

Critical (Severity: 5)

  • Definition: Issues that prevent code execution, compilation, or deployment
  • Examples:
    • Syntax errors
    • Import errors
    • Type errors that prevent compilation
    • Critical security vulnerabilities
  • Response Time: Immediate (within 1 development cycle)
  • Impact: Blocks development or deployment

High (Severity: 4)

  • Definition: Issues that cause runtime errors or significant problems
  • Examples:
    • Runtime exceptions
    • Logic errors
    • High-severity security warnings
    • Breaking API changes
  • Response Time: Urgent (within 1-2 development cycles)
  • Impact: Causes failures or significant problems

Medium (Severity: 3)

  • Definition: Issues that may cause problems or reduce quality
  • Examples:
    • Code quality warnings
    • Performance warnings
    • Deprecation warnings
    • Medium-severity security issues
  • Response Time: Important (within 2-4 development cycles)
  • Impact: Reduces quality or may cause problems

Low (Severity: 2)

  • Definition: Issues that are minor or cosmetic
  • Examples:
    • Style violations
    • Minor code quality suggestions
    • Informational messages
    • Low-severity security suggestions
  • Response Time: As time permits (within 4+ development cycles)
  • Impact: Minor quality improvements

Informational (Severity: 1)

  • Definition: Suggestions or best practices
  • Examples:
    • Code style suggestions
    • Best practice recommendations
    • Optimization opportunities
  • Response Time: Optional (when convenient)
  • Impact: Nice-to-have improvements

Dimension 2: Impact

Definition: The scope and effect of addressing or not addressing the issue

Impact Levels:

Critical Impact (Impact: 5)

  • Definition: Affects core functionality, multiple users, or critical systems
  • Scope:
    • Affects multiple projects or frameworks
    • Blocks critical workflows
    • Affects production systems
    • Security or data integrity issues
  • Examples:
    • Framework breaking changes
    • Critical security vulnerabilities
    • Data loss risks
    • System-wide failures

High Impact (Impact: 4)

  • Definition: Affects significant functionality or user experience
  • Scope:
    • Affects major features
    • Impacts developer experience significantly
    • Affects multiple components
    • Performance degradation
  • Examples:
    • Major feature bugs
    • Significant performance issues
    • Developer workflow blockers
    • Framework compatibility issues

Medium Impact (Impact: 3)

  • Definition: Affects moderate functionality or quality
  • Scope:
    • Affects specific features or components
    • Impacts developer experience moderately
    • Affects code quality or maintainability
    • Moderate performance issues
  • Examples:
    • Feature-specific bugs
    • Code quality issues
    • Moderate performance problems
    • Documentation gaps

Low Impact (Impact: 2)

  • Definition: Affects minor functionality or quality
  • Scope:
    • Affects edge cases or minor features
    • Minimal impact on developer experience
    • Cosmetic or minor quality issues
    • Minor performance improvements
  • Examples:
    • Edge case bugs
    • Minor code quality issues
    • Style inconsistencies
    • Minor documentation updates

Minimal Impact (Impact: 1)

  • Definition: Negligible or no practical impact
  • Scope:
    • Affects rarely-used code paths
    • No practical impact on users
    • Purely cosmetic
    • Theoretical improvements
  • Examples:
    • Unused code cleanup
    • Purely cosmetic changes
    • Theoretical optimizations
    • Minor style suggestions

Dimension 3: Effort

Definition: The estimated time and complexity required to address the issue

Effort Levels:

Minimal Effort (Effort: 1)

  • Definition: Quick fixes, simple changes (≤ 1 hour)
  • Examples:
    • Fixing typos
    • Simple style fixes
    • Adding missing imports
    • Quick documentation updates
  • Time Estimate: ≤ 1 hour
  • Complexity: Low

Low Effort (Effort: 2)

  • Definition: Straightforward changes (1-4 hours)
  • Examples:
    • Simple bug fixes
    • Minor refactoring
    • Documentation updates
    • Adding simple tests
  • Time Estimate: 1-4 hours
  • Complexity: Low-Medium

Medium Effort (Effort: 3)

  • Definition: Moderate changes requiring some analysis (4-16 hours)
  • Examples:
    • Moderate refactoring
    • Feature improvements
    • Complex bug fixes
    • Significant documentation work
  • Time Estimate: 4-16 hours (0.5-2 days)
  • Complexity: Medium

High Effort (Effort: 4)

  • Definition: Significant changes requiring substantial work (16-40 hours)
  • Examples:
    • Major refactoring
    • Architecture changes
    • Large feature additions
    • Comprehensive documentation
  • Time Estimate: 16-40 hours (2-5 days)
  • Complexity: High

Very High Effort (Effort: 5)

  • Definition: Major changes requiring extensive work (40+ hours)
  • Examples:
    • Major architecture overhauls
    • Complete framework rewrites
    • Large-scale migrations
    • Comprehensive system redesigns
  • Time Estimate: 40+ hours (5+ days)
  • Complexity: Very High

Prioritization Score Calculation

Base Priority Score

Formula (before the effort inversion described below):

Priority Score = (Severity × 0.4) + (Impact × 0.4) + (Effort × 0.2)

Weighting Rationale:

  • Severity (40%): Criticality of the issue
  • Impact (40%): Scope and effect of the issue
  • Effort (20%): Weighted inversely; lower effort yields a higher priority score (see Effort Adjustment below)

Effort Adjustment:

  • Effort is inverted in calculation: (6 - Effort) × 0.2
  • Lower effort tasks get higher priority scores
  • This ensures quick wins are prioritized appropriately

Final Formula:

Priority Score = (Severity × 0.4) + (Impact × 0.4) + ((6 - Effort) × 0.2)
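
The final formula can be sketched as a small helper (Python is used for illustration; the function name and input validation are ours, not part of the framework):

```python
def priority_score(severity: int, impact: int, effort: int) -> float:
    """Base priority score: severity, impact, and effort each rated 1-5.

    Effort is inverted (6 - effort) so that lower-effort tasks score
    higher, which is how the framework prioritizes quick wins.
    """
    for name, value in (("severity", severity), ("impact", impact), ("effort", effort)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be in 1..5, got {value}")
    return round(severity * 0.4 + impact * 0.4 + (6 - effort) * 0.2, 2)
```

For instance, the critical syntax error in Example 1 scores `priority_score(5, 5, 1)` = 5.0, and the complexity warning in Example 2 scores `priority_score(2, 3, 3)` = 2.6, matching the worked calculations later in this document.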

Priority Score Ranges

Critical Priority (Score: 4.0 - 5.0):

  • Must address immediately
  • Blocks development or deployment
  • Critical security or functionality issues

High Priority (Score: 3.0 - 3.9):

  • Should address within 1-2 development cycles
  • Significant impact on functionality or quality
  • Important for codebase health

Medium Priority (Score: 2.0 - 2.9):

  • Should address within 2-4 development cycles
  • Moderate impact on functionality or quality
  • Contributes to codebase health

Low Priority (Score: 1.0 - 1.9):

  • Address as time permits
  • Minor impact on functionality or quality
  • Nice-to-have improvements

Minimal Priority (Score: 0.0 - 0.9):

  • Optional, address when convenient
  • Negligible impact
  • Purely cosmetic or theoretical

Contextual Adjustments

Strategic Importance

Adjustment Factor: ±0.5 to Priority Score

High Strategic Importance (+0.5):

  • Aligns with project goals or roadmap
  • Enables future work or features
  • Addresses technical debt that blocks progress
  • Improves developer experience significantly

Low Strategic Importance (-0.5):

  • Not aligned with current priorities
  • Deferred or low-value work
  • Cosmetic improvements only
  • Theoretical benefits only

Dependency Considerations

Adjustment Factor: ±0.3 to Priority Score

Blocks Other Work (+0.3):

  • Prevents other tasks from proceeding
  • Required for dependent features
  • Blocks critical workflows

Depends on Other Work (-0.3):

  • Cannot proceed until other work is complete
  • Lower urgency until dependencies resolved

Risk Factors

Adjustment Factor: ±0.2 to Priority Score

High Risk (+0.2):

  • Security vulnerabilities
  • Data integrity risks
  • System stability concerns
  • Compliance issues

Low Risk (-0.2):

  • Low-risk changes
  • Well-tested code paths
  • Isolated changes

Prioritization Process

Step 1: Task Identification

Sources:

  • IDE-flagged issues (errors, warnings, info)
  • Code analysis tools
  • Manual code reviews
  • User feedback
  • Technical debt tracking

Documentation:

  • Create task in Kanban framework
  • Document issue details
  • Capture context and examples

Step 2: Initial Assessment

Assess Each Dimension:

  1. Severity: Determine criticality level (1-5)
  2. Impact: Determine scope and effect (1-5)
  3. Effort: Estimate time and complexity (1-5)

Document Assessment:

  • Record severity, impact, and effort ratings
  • Note reasoning for each rating
  • Capture any special considerations

Step 3: Calculate Base Priority Score

Apply Formula:

Priority Score = (Severity × 0.4) + (Impact × 0.4) + ((6 - Effort) × 0.2)

Record Score:

  • Document calculated priority score
  • Note priority level (Critical/High/Medium/Low/Minimal)

Step 4: Apply Contextual Adjustments

Review Context:

  • Strategic importance
  • Dependency considerations
  • Risk factors

Apply Adjustments:

  • Add/subtract adjustment factors
  • Document reasoning for adjustments
  • Finalize adjusted priority score

Step 5: Kanban Integration

Assign to Kanban:

  • Create task in appropriate Epic/Story
  • Set priority based on calculated score
  • Assign MoSCOW classification:
    • Must Have (M): Critical Priority (4.0-5.0)
    • Should Have (S): High Priority (3.0-3.9)
    • Could Have (C): Medium Priority (2.0-2.9)
    • Ongoing (O): Perpetual maintenance tasks (not scored, tracked separately)
    • Won't Have (W): Low/Minimal Priority (<2.0)

Track Progress:

  • Update task status as work progresses
  • Document resolution and outcomes
  • Learn from prioritization accuracy

Kanban Integration

MoSCOW Classification

Must Have (M):

  • Priority Score: 4.0 - 5.0
  • Critical Priority tasks
  • Must be completed before release
  • Blocks development or deployment

Should Have (S):

  • Priority Score: 3.0 - 3.9
  • High Priority tasks
  • Should be completed soon
  • Important for quality or functionality

Could Have (C):

  • Priority Score: 2.0 - 2.9
  • Medium Priority tasks
  • Could be completed if time permits
  • Contributes to codebase health

Ongoing (O):

  • Perpetual maintenance tasks
  • Tasks that never complete (e.g., workflow maintenance, changelog management)
  • Tracked but don't block other work
  • Not scored using priority framework (separate category)

Won't Have (W):

  • Priority Score: < 2.0
  • Low/Minimal Priority tasks
  • Deferred or optional
  • Nice-to-have improvements
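
The score-to-MoSCOW mapping above can be sketched as follows (Python for illustration; treating each range's lower bound as an inclusive threshold is our reading of the published ranges, and Ongoing (O) is excluded because it is not scored):

```python
def moscow_for_score(score: float) -> str:
    """Map an adjusted priority score to its MoSCOW classification.

    Ongoing (O) tasks are perpetual, are not scored by this framework,
    and therefore do not appear in this mapping.
    """
    if score >= 4.0:
        return "M"  # Must Have: Critical Priority
    if score >= 3.0:
        return "S"  # Should Have: High Priority
    if score >= 2.0:
        return "C"  # Could Have: Medium Priority
    return "W"      # Won't Have: Low/Minimal Priority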

Epic/Story Assignment

Epic 7: Codebase Maintenance and Review

  • Story 1: Codebase Maintenance Tasks
    • IDE-flagged issues (E7:S01:T01 - Perpetual)
    • Prioritization framework (E7:S01:T02)
    • Workflow processes (E7:S01:T03)
    • Kanban integration (E7:S01:T04)

Task Creation:

  • Create tasks in appropriate Epic/Story
  • Include priority score in task description
  • Document severity, impact, and effort ratings
  • Link to related issues or context

Priority Tracking

Task Metadata:

  • Priority Score: Calculated priority score
  • Severity: Severity rating (1-5)
  • Impact: Impact rating (1-5)
  • Effort: Effort rating (1-5)
  • MoSCOW: MoSCOW classification
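
A minimal sketch of this task metadata as a record type, assuming a Python dataclass (the field names are illustrative, not a prescribed Kanban schema):

```python
from dataclasses import dataclass

@dataclass
class MaintenanceTask:
    """Metadata tracked per maintenance task (illustrative fields)."""
    title: str
    severity: int          # 1-5 rating
    impact: int            # 1-5 rating
    effort: int            # 1-5 rating
    priority_score: float  # calculated (and adjusted) score
    moscow: str            # "M", "S", "C", "O", or "W"
```

Recording the three ratings alongside the final score keeps the reasoning auditable, which supports the accuracy tracking described below.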

Progress Tracking:

  • Track priority score accuracy
  • Learn from prioritization outcomes
  • Refine framework based on experience

Examples

Example 1: Critical Syntax Error

Task: Fix Python syntax error in framework-health-monitor.py (line 45)

Assessment:

  • Severity: 5 (Critical - prevents execution)
  • Impact: 5 (Critical - blocks framework health monitoring)
  • Effort: 1 (Minimal - simple syntax fix)

Calculation:

Priority Score = (5 × 0.4) + (5 × 0.4) + ((6 - 1) × 0.2)
= 2.0 + 2.0 + 1.0
= 5.0

Result: Critical Priority (5.0) - Must Have (M)

Action: Address immediately

Example 2: Code Quality Warning

Task: Reduce cyclomatic complexity in validate_version_bump.py (currently 15, target: 10)

Assessment:

  • Severity: 2 (Low - warning, doesn't prevent execution)
  • Impact: 3 (Medium - affects code maintainability)
  • Effort: 3 (Medium - requires refactoring)

Calculation:

Priority Score = (2 × 0.4) + (3 × 0.4) + ((6 - 3) × 0.2)
= 0.8 + 1.2 + 0.6
= 2.6

Result: Medium Priority (2.6) - Could Have (C)

Action: Address within 2-4 development cycles

Example 3: Security Vulnerability

Task: Update dependency with known security vulnerability (CVE-2024-XXXX)

Assessment:

  • Severity: 5 (Critical - security vulnerability)
  • Impact: 4 (High - affects security posture)
  • Effort: 2 (Low - dependency update)

Contextual Adjustments:

  • Risk Factor: +0.2 (High risk - security issue)

Calculation:

Base Score = (5 × 0.4) + (4 × 0.4) + ((6 - 2) × 0.2)
= 2.0 + 1.6 + 0.8
= 4.4
Adjusted Score = 4.4 + 0.2 = 4.6

Result: Critical Priority (4.6) - Must Have (M)

Action: Address urgently (within 1 development cycle)

Example 4: Documentation Update

Task: Update outdated documentation in framework-health-metrics.md

Assessment:

  • Severity: 1 (Informational - documentation issue)
  • Impact: 2 (Low - affects documentation quality)
  • Effort: 1 (Minimal - documentation update)

Calculation:

Priority Score = (1 × 0.4) + (2 × 0.4) + ((6 - 1) × 0.2)
= 0.4 + 0.8 + 1.0
= 2.2

Result: Medium Priority (2.2) - Could Have (C)

Action: Address within 2-4 development cycles


Best Practices

Regular Prioritization Reviews

Frequency:

  • Weekly: Review high-priority tasks
  • Monthly: Comprehensive prioritization review
  • Quarterly: Framework effectiveness review

Process:

  • Review pending tasks
  • Re-assess priority scores if context changed
  • Adjust priorities based on new information
  • Learn from completed tasks

Batch Processing

Group Similar Tasks:

  • Batch similar issues for efficient resolution
  • Group by category (IDE errors, code quality, etc.)
  • Process batches during maintenance windows

Benefits:

  • Improved efficiency
  • Consistent fixes
  • Better context understanding
  • Reduced overhead

Continuous Improvement

Track Outcomes:

  • Monitor prioritization accuracy
  • Learn from prioritization decisions
  • Refine framework based on experience
  • Update criteria as patterns emerge

Framework Evolution:

  • Review framework effectiveness regularly
  • Adjust weighting if needed
  • Add new criteria as patterns emerge
  • Document lessons learned

Integration with Other Frameworks

Framework Health Monitoring

Connection:

  • Maintenance tasks affect framework health scores
  • Health metrics inform prioritization
  • Prioritization supports health improvement

Usage:

  • Use health metrics to identify maintenance needs
  • Prioritize tasks that improve health scores
  • Track health improvement from maintenance work

Release Workflow (RW)

Connection:

  • Maintenance tasks may be included in releases
  • Prioritization guides release planning
  • Releases may create maintenance tasks

Usage:

  • Include high-priority maintenance in releases
  • Address critical issues before releases
  • Document maintenance work in changelogs

Update Kanban Workflow (UKW)

Connection:

  • UKW may identify maintenance needs
  • Prioritization guides UKW task creation
  • Maintenance status tracked in Kanban

Usage:

  • UKW identifies maintenance opportunities
  • Create prioritized tasks from UKW findings
  • Track maintenance progress in Kanban

References

  • Kanban Governance Policy: packages/frameworks/kanban/policies/kanban-governance-policy.md
  • Framework Health Metrics: docs/architecture/standards-and-adrs/framework-health-metrics.md
  • Epic 7: docs/project-management/kanban/epics/Epic-7/Epic-7.md
  • Story 1: docs/project-management/kanban/epics/Epic-7/Story-001-codebase-maintenance-tasks.md

Decision Record

Decision: Implement multi-dimensional prioritization framework with severity, impact, and effort dimensions, weighted scoring, and Kanban integration.

Rationale:

  • Provides systematic, objective prioritization
  • Considers multiple factors for balanced decisions
  • Integrates seamlessly with Kanban framework
  • Supports continuous improvement

Alternatives Considered:

  • Single-dimension prioritization (too simplistic)
  • Unweighted dimensions (doesn't reflect importance)
  • Manual-only prioritization (inconsistent)
  • Tool-based prioritization (adds dependencies)

Consequences:

  • Requires initial assessment effort
  • Framework may need refinement based on experience
  • Provides foundation for systematic maintenance
  • Enables data-driven prioritization decisions