## Overview
This taxonomy maps the key components of the ISDLC framework, providing a structured reference for how phases, pillars, artifacts, roles, and tools interconnect.
### 📚 How to Use This Taxonomy
Use this as a reference guide when planning ISDLC adoption, designing workflows, or mapping your existing processes to the ISDLC model. Each section provides detailed breakdowns with practical examples.
## Artifacts & Documentation by Phase
### Phase 1: Intend
| Artifact | Purpose | Key Contents | Acceptance Criteria |
|---|---|---|---|
| Project Charter | Captures business objectives and scope | Goals, scope, success metrics, stakeholders, ROI | Goals clear, scope defined, stakeholders signed off |
| Requirements Spec | Formalizes functional/non-functional requirements | User stories, use cases, constraints, quality criteria | Each requirement has criteria, traceable to need |
| Context Glossary | Defines domain terms and data dictionary | Domain models, key terms, business rules | Entries cover all domain-critical terms |
| Risk & Compliance Register | Lists risks, mitigations, compliance rules | Risk descriptions, likelihood, impact, mitigation plans | High risks have mitigation, aligned with audit requirements |
### Phase 2: Structure
| Artifact | Purpose | Key Contents | Acceptance Criteria |
|---|---|---|---|
| Architecture Design Doc | Describes system and software architecture | Component diagrams, data flows, API interfaces, tech stack | All components defined, architecture vetted |
| Solution Blueprint | Integration plan of subsystems and services | Sequence flows, integration points, external interfaces | Interfaces defined, no major integration gaps |
| Interface / API Spec | Detailed API endpoints and contracts | Endpoint definitions, inputs/outputs, error codes | Specifications complete, validated by sample call or mock |
| Operational Constraints Doc | Captures performance and security requirements | SLAs, deployment targets, security requirements, failover | All SLAs and security controls documented, risk-reviewed |
### Phase 3: Develop
| Artifact | Purpose | Key Contents | Acceptance Criteria |
|---|---|---|---|
| Code & Documentation | Actual source code and in-line documentation | Generated classes/files, code comments, README files | Code follows style guide, comments explain key logic |
| Unit & Integration Test Plan | Outlines test cases and coverage goals | Test scenarios, coverage targets, automation notes | Automated tests for all requirements, high coverage |
| CI/CD Pipeline Config | Scripts for building, testing, deploying | Build scripts, deployment scripts, environment configs | Pipelines run successfully, deployments reproducible |
| Code Review Checklist | Ensures code meets standards and security | Style guide, security checklist (OWASP), design patterns | Checklist fully ticked off before merge |
### Phase 4: Launch
| Artifact | Purpose | Key Contents | Acceptance Criteria |
|---|---|---|---|
| Release Plan / Checklist | Details release schedule and procedures | Release schedule, rollback plan, communication plan | All steps complete in test, approvals recorded |
| Deployment Manifest | Final deployment scripts and configs | Infrastructure definitions, parameters, versions | Infrastructure deployed in staging, metrics baseline captured |
| Runbook / Operations Guide | Operating procedures and troubleshooting | Launch procedures, common troubleshooting steps | Critical runbook entries verified via drill or tabletop |
| Test Report Summary | Results of final validation testing | Pass/fail summary, defect log, performance vs SLAs | All critical tests passed, no unresolved high defects |
### Phase 5: Continuously Evolve
| Artifact | Purpose | Key Contents | Acceptance Criteria |
|---|---|---|---|
| Monitoring Dashboard | Live metrics and alerts for system health | Dashboard views (graphs of CPU, requests/sec, error rate) | Dashboards updating, alerts set for threshold breaches |
| Incident/Postmortem Reports | Documents incidents and root causes | Incident timeline, analysis, action items | Follow-up actions tracked, root causes addressed |
| Improvement Backlog | New features or fixes for next cycles | Feature/user story list with priority, rationale | Top items aligned with business impact and capacity |
| Architecture Review Log | Records architecture changes over time | Change description, reason, approved by | All changes documented, validated against design constraints |
## Roles & Agents by Phase
| Phase | Human Roles | AI Agents & Automation |
|---|---|---|
| Intend | Product Owner, Business Analysts, Stakeholders | Requirements extraction agents, chatbots, sentiment analysis, market research automation |
| Structure | Solution Architects, Development Leads, Security Architects | Design assistance agents, architecture diagram generators, API spec creators, threat modeling tools |
| Develop | Software Engineers (reviewers/guides), QA Engineers, Tech Leads | Code generation agents (Copilot, Q, Claude), test generation, documentation writers, dependency managers |
| Launch | DevOps Engineers, Site Reliability Engineers, Release Managers | Deployment orchestration, container builders, infrastructure provisioners, smoke testers |
| Continuously Evolve | SREs, Operations Teams, Data Analysts, Product Managers | Auto-remediation agents, anomaly detection, performance optimizers, capacity planners |
### 🤝 Shift in Responsibilities
In ISDLC, human roles shift from execution to oversight and validation. Developers review AI-generated code rather than writing it from scratch. Architects validate AI-proposed designs rather than creating diagrams manually. This shift requires training and cultural adaptation.
## Tools & Technologies
### AI & Code Generation
| Tool Category | Examples | Primary Usage |
|---|---|---|
| AI Coding Assistants | GitHub Copilot, Amazon Q Developer, Claude Code, Cursor | Code generation, test creation, documentation, refactoring |
| Large Language Models | GPT-4, Claude, Gemini | Requirements analysis, design alternatives, documentation |
| Vector Databases | Pinecone, Weaviate, Chroma | Context storage for RAG pipelines |
### Development & Testing
| Tool Category | Examples | Primary Usage |
|---|---|---|
| CI/CD Platforms | GitHub Actions, GitLab CI, Jenkins, CircleCI | Automated build, test, deploy pipelines |
| Security Scanning | Snyk, SonarQube, Checkmarx, GitGuardian | SAST, SCA, secret scanning, vulnerability detection |
| Test Automation | Pytest, Jest, Selenium, Cypress | Unit, integration, and E2E testing |
### Infrastructure & Operations
| Tool Category | Examples | Primary Usage |
|---|---|---|
| Infrastructure as Code | Terraform, CloudFormation, Pulumi | Declarative infrastructure provisioning |
| Container Orchestration | Kubernetes, Docker Swarm, ECS | Container deployment and management |
| Monitoring & Observability | Prometheus, Datadog, New Relic, Splunk | Metrics, logs, traces, alerting |
| Cloud Platforms | AWS, Azure, GCP | Hosting, storage, managed services |
### Collaboration & Knowledge Management
| Tool Category | Examples | Primary Usage |
|---|---|---|
| Project Management | Jira, Linear, Azure DevOps | Requirements tracking, sprint planning, backlog management |
| Documentation | Confluence, Notion, GitBook | Knowledge base, architecture docs, runbooks |
| Modeling Tools | Lucidchart, draw.io, PlantUML | Architecture diagrams, data flows, sequence diagrams |
## Key Workflows & Processes
### 1. Spec-Driven Development
**Philosophy:** "Spec first, code second" – write detailed specifications before coding with AI.
**Process:**
- Write comprehensive requirements with acceptance criteria
- Create detailed API specifications and data models
- AI uses specs as contracts to generate implementation
- Humans validate that generated code meets spec
**Benefits:** Clear intent definition, consistent implementations, easier validation
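As a sketch of this workflow, a spec's acceptance criteria can be expressed as an executable contract that any generated implementation must satisfy. The order-creation endpoint, field names, and rules below are illustrative, not part of the ISDLC framework itself:

```python
from dataclasses import dataclass

# Hypothetical spec fragment for a "create order" endpoint. The
# acceptance criteria become assertions that validate whatever
# implementation an AI agent generates.

@dataclass
class OrderRequest:
    sku: str
    quantity: int

@dataclass
class OrderResponse:
    order_id: str
    status: str  # spec: must be "accepted" or "rejected"

def validate_against_spec(handler):
    """Check a generated handler against the spec's acceptance criteria."""
    resp = handler(OrderRequest(sku="ABC-123", quantity=2))
    assert isinstance(resp, OrderResponse), "must return an OrderResponse"
    assert resp.status in {"accepted", "rejected"}, "status outside contract"
    assert resp.order_id, "order_id must be non-empty"
    return True

# A trivial implementation that satisfies the contract:
def create_order(req: OrderRequest) -> OrderResponse:
    if req.quantity <= 0:
        return OrderResponse(order_id="n/a", status="rejected")
    return OrderResponse(order_id=f"ord-{req.sku}", status="accepted")
```

Because the validator encodes the spec rather than the implementation, it can be rerun unchanged against each AI-regenerated version of the handler.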
### 2. Mob Elaboration & Construction
**Mob Elaboration (Inception):** The team jointly reviews AI-generated plans and refines them in real time.
**Mob Construction (Development):** An AI coding agent and developers pair-program, with the AI producing code stubs and humans steering decisions.
**Benefits:** Shared understanding, rapid iteration, continuous learning
### 3. Human-in-the-Loop (HITL) Checkpoints
**Mandatory review points:**
- Post-Intend: Human approval of final requirements document
- Post-Structure: Sign-off on architecture decisions and design
- During Develop: Pull request review for each AI-generated change
- Pre-Launch: Final production deployment approval
**Purpose:** Ensure accuracy, safety, and accountability, and maintain an audit trail
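The checkpoints above can be sketched as machine-checkable gates, so tooling can refuse to advance a phase without its human sign-off. The phase names and approval labels here are illustrative, not part of any real ISDLC tooling:

```python
# Each phase maps to the human approval it requires before completion.
REQUIRED_APPROVALS = {
    "intend": "requirements_signoff",     # Post-Intend checkpoint
    "structure": "architecture_signoff",  # Post-Structure checkpoint
    "develop": "pr_review",               # During-Develop checkpoint
    "launch": "deploy_approval",          # Pre-Launch checkpoint
}

def may_advance(phase: str, approvals: set) -> bool:
    """A phase may complete only once its human checkpoint is on record."""
    return REQUIRED_APPROVALS[phase] in approvals
```

Recording the `approvals` set in an immutable log doubles as the audit trail the checkpoint exists to provide.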
### 4. Continuous Integration & Deployment
**Pipeline stages:**
- Code commit triggers CI build
- Run AI-generated unit and integration tests
- Execute automated security scans (SAST/SCA/secrets)
- Build and containerize application
- Deploy to staging environment
- Run smoke tests and validation
- Auto-deploy to production (with approval gate if required)
**Goal:** Zero-touch deployment with quality gates
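The staged, gated flow above can be sketched as a simple runner: each stage is a quality gate, a failure stops the run, and the production deploy honors the approval gate. Stage names mirror the list; the lambdas stand in for real build/test/scan tool invocations:

```python
def run_pipeline(stages, require_approval=True, approved=False):
    """Run stages in order; stop at the first failing gate.
    The production deploy only runs once the approval gate is satisfied."""
    for name, stage in stages:
        if name == "deploy_production" and require_approval and not approved:
            return "blocked: awaiting approval"
        if not stage():
            return f"failed: {name}"
    return "success"

# Stubbed stages in pipeline order; real stages would shell out to
# the CI platform's build, test, scan, and deploy steps.
stages = [
    ("build", lambda: True),
    ("unit_and_integration_tests", lambda: True),
    ("security_scans", lambda: True),
    ("containerize", lambda: True),
    ("deploy_staging", lambda: True),
    ("smoke_tests", lambda: True),
    ("deploy_production", lambda: True),
]
```

Setting `require_approval=False` models the fully zero-touch path; leaving it on preserves the Pre-Launch HITL checkpoint.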
### 5. RAG Context Pipelines
**Implementation:**
- Index all documentation and code in vector database
- At each AI agent invocation, retrieve relevant context chunks
- Feed context (requirements, API specs, past decisions) to AI
- AI generates output informed by complete project context
- Continuously update context store with new artifacts and learnings
**Purpose:** Prevent AI from "forgetting" earlier decisions, maintain consistency
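A toy sketch of the retrieval step: real pipelines use embedding models and a vector database (Pinecone, Weaviate, Chroma), but bag-of-words vectors make the index-then-retrieve flow runnable without external services. The artifact ids and texts are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: token counts as a sparse vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: dict, k: int = 2) -> list:
    """Return the ids of the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda cid: cosine(q, store[cid]), reverse=True)
    return ranked[:k]

# Index project artifacts, then retrieve context before an agent call:
store = {
    "req-001": embed("orders must be rejected when quantity is zero"),
    "api-spec": embed("POST /orders returns order_id and status"),
    "adr-007": embed("we chose event sourcing for the order service"),
}
```

The retrieved chunks are what gets prepended to the agent's prompt, which is how earlier requirements and decisions stay in scope across invocations.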
### 6. Continuous Feedback Loop
**Flow:**
- Production monitoring collects metrics, logs, user feedback
- AI agents analyze data for patterns, anomalies, opportunities
- Insights automatically generate backlog items
- Prioritized improvements feed back into Intend phase
- Cycle repeats continuously
**Result:** Self-improving system with data-driven evolution
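The insights-to-backlog step can be sketched as follows; the anomaly records and the impact-times-frequency scoring rule are illustrative stand-ins for whatever a real monitoring stack and prioritization policy would supply:

```python
def to_backlog(anomalies, capacity=2):
    """Turn detected anomalies into a prioritized improvement backlog,
    keeping only what fits the team's capacity for the next cycle."""
    items = [
        {"title": f"Investigate {a['metric']} regression",
         "score": a["impact"] * a["frequency"]}
        for a in anomalies
    ]
    items.sort(key=lambda i: i["score"], reverse=True)
    return items[:capacity]

# Example anomalies as a monitoring agent might report them:
anomalies = [
    {"metric": "p99_latency", "impact": 5, "frequency": 3},
    {"metric": "error_rate", "impact": 4, "frequency": 5},
    {"metric": "cpu_usage", "impact": 2, "frequency": 2},
]
```

The `capacity` cutoff is what keeps the loop aligned with the Improvement Backlog's acceptance criterion of matching business impact to team capacity.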
## Metrics & KPIs
### Phase-Level KPIs
| Phase | Key Metrics |
|---|---|
| Intend | Number of requirements identified, ambiguity rate, stakeholder satisfaction, requirements change rate |
| Structure | Architecture review time, design revision count, risk assessment coverage, traceability completeness |
| Develop | Code churn, PR review time, code coverage %, defect injection rate, % tests auto-generated |
| Launch | Deployment frequency, mean time to deploy (MTTD), deployment success rate, mean time to restore (MTTR) |
| Continuously Evolve | System uptime %, incidents per period, incident resolution time, automated remediations, backlog aging |
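Two of the Launch and Evolve metrics above can be computed directly from event logs. A minimal sketch, assuming deploy timestamps and incident records with `opened`/`resolved` fields (both illustrative shapes):

```python
from datetime import datetime, timedelta

def deployment_frequency(deploy_times, window_days=7):
    """Deployments per day over the window ending at the latest deploy."""
    end = max(deploy_times)
    start = end - timedelta(days=window_days)
    in_window = [t for t in deploy_times if t >= start]
    return len(in_window) / window_days

def mttr(incidents):
    """Mean time to restore, in minutes, over resolved incidents."""
    durations = [(i["resolved"] - i["opened"]).total_seconds() / 60
                 for i in incidents]
    return sum(durations) / len(durations)
```

In practice these would be wired to the deployment pipeline's event stream and the incident tracker rather than hand-built lists.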
### Pillar-Level KPIs
| Pillar | Key Metrics |
|---|---|
| Governance | Policy violations detected, audit cycle time, compliance scan pass rate |
| Context | Context retrieval hit rate, prompt response latency, knowledge hits per query, concept drift detection |
| Evaluation | Test pass rate, time to detect/close bugs, production vs staging defects |
| Automation | % tasks automated, pipeline success rate, manual intervention frequency |
| Observability | Instrumentation coverage %, time to detect anomalies, alert-to-incident ratio, latency percentiles |
## Security & Governance Controls
### Controls by Phase
#### Intend Phase Controls
- Define security requirements (encryption, least-privilege)
- Classify data (PCI, PII, PHI) upfront
- Create AI ethics checklist and guardrails
- Establish audit logging for key decisions
#### Structure Phase Controls
- Perform early threat modeling and architecture reviews
- Use automated policy tools on proposed designs
- Validate cloud services against enterprise standards
- Document data flows to identify sensitive areas
#### Develop Phase Controls
- Run SAST on every code commit
- Execute SCA for dependency vulnerabilities
- Use secrets scanning to catch credential leaks
- Enforce coding standards via linters
- Store all AI prompts and outputs for traceability
- Require human review before merging AI code
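As a sketch of the secrets-scanning control, a few regex patterns can flag credential-like strings in a diff before it merges. The patterns below are a tiny illustrative subset; real pipelines rely on dedicated tools such as GitGuardian, with far broader rule sets and entropy checks:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key id
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"), # PEM private key
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_for_secrets(text: str) -> list:
    """Return the secret-like strings found in a diff or file."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Wired into the CI pipeline as a pre-merge gate, a non-empty result fails the build and blocks the commit from landing.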
#### Launch Phase Controls
- Include IaC and container scanning in pipelines
- Validate runtime configs match security reviews
- Use canary/blue-green with monitoring
- Maintain approval workflow for production changes
#### Evolve Phase Controls
- Continuous monitoring for security events
- Automated incident response procedures
- Regular security audits and penetration testing
- Proactive vulnerability patching
### ⚠️ Critical Security Considerations
- **AI Hallucinations:** Always require human review of AI-generated security-critical code
- **Data Leakage:** Never feed proprietary code into external LLMs without precautions
- **Access Control:** Limit AI tool access to sensitive data using role-based permissions
- **Audit Trail:** Maintain immutable logs of all AI decisions and human approvals