Session Based Test Management: Complete SBTM Guide

Published on February 6, 2026 · Virtuoso QA · Guest Author

Master Session Based Test Management to structure exploratory testing. Learn charters, time boxing, session reports, and debriefing for measurable results.

Exploratory testing discovers defects that scripted approaches miss, but its unstructured nature makes it difficult to manage, measure, and scale. Session Based Test Management (SBTM) brings discipline to exploratory testing without sacrificing the creativity that makes it effective. This guide explains how SBTM works, when to apply it, and how modern AI native testing platforms complement session based approaches. Whether you are formalizing existing exploratory practices or introducing structured testing to your organization, SBTM provides the framework for accountable, measurable exploration.

What is Session Based Test Management?

Session Based Test Management is a methodology that structures exploratory testing into time boxed sessions with defined charters, documented activities, and measurable outcomes. Developed by Jonathan and James Bach in the early 2000s, SBTM addresses the primary criticism of exploratory testing: that it lacks accountability and repeatability.

The Core Concept

SBTM organizes testing into sessions, typically 60 to 120 minutes in duration, during which a tester explores a defined area of the application guided by a charter. The charter describes what to test and what to look for without prescribing specific steps.

After each session, the tester documents findings in a session report that captures:

  • What was tested
  • What was discovered
  • How time was spent
  • What questions emerged

This documentation enables management oversight, knowledge sharing, and progress tracking while preserving tester autonomy in how they conduct the exploration.

SBTM vs Traditional Exploratory Testing

Unstructured exploratory testing lets testers investigate freely without constraints. While this freedom enables creative discovery, it creates challenges:

  • Accountability: How do you know what was tested?
  • Coverage: How do you track which areas received attention?
  • Progress: How do you measure testing completion?
  • Communication: How do you explain testing activities to stakeholders?

SBTM answers these questions through structure without eliminating the cognitive engagement that makes exploratory testing effective. Testers still decide how to test; they simply document their decisions and findings.

When to Use Session Based Test Management

SBTM excels in situations where:

  • Requirements are incomplete or evolving
  • Application areas are unfamiliar or poorly documented
  • Previous testing has missed defects despite passing scripted tests
  • Testers need freedom to follow their instincts
  • Management requires visibility into testing activities

SBTM complements rather than replaces scripted testing. Organizations typically use SBTM for new feature exploration, risk areas, and usability investigation while maintaining automated regression coverage for stable functionality.

Key Components of Session Based Test Management (SBTM)

Session Based Test Management relies on four interconnected components that transform unstructured exploration into manageable, measurable testing activities. Each component serves a specific purpose while supporting the others. Understanding how these elements work together enables effective implementation and maximizes the value SBTM delivers.

1. Test Charters

A charter defines the mission for a testing session. Effective charters are specific enough to provide direction but broad enough to allow exploration. Too narrow and testers miss important discoveries adjacent to their path. Too broad and sessions lack focus, producing shallow coverage across many areas rather than deep investigation of important ones.

Charter Structure

A well formed charter includes:

  • Target: What area, feature, or functionality to explore
  • Resources: Tools, data, environments, or documentation to use
  • Information: What to look for or learn

Example charters:

  • "Explore the checkout process using various payment methods to discover usability issues and edge cases"
  • "Investigate the user profile editing functionality with international characters to identify data handling problems"
  • "Test the search feature with complex queries to find performance and relevance issues"

Charter Development

Charters emerge from multiple sources:

  • Risk analysis identifying areas warranting investigation
  • User stories requiring validation
  • Bug reports suggesting deeper exploration
  • Team hunches about problematic areas
  • Coverage gaps in existing test suites

Involve testers in charter development. Their domain knowledge and testing intuition improve charter quality and increase engagement during sessions. Testers who help create charters feel ownership over the testing mission rather than simply following instructions.

2. Time Boxing

Time boxing means conducting testing sessions within fixed durations. Rather than testing until finished or until interrupted, testers commit to focused exploration for a predetermined period. This simple constraint produces significant benefits for both individual effectiveness and organizational management.

Sessions typically run between 60 and 120 minutes. This constraint provides several benefits:

  • Focus: Limited time encourages concentrated attention
  • Measurement: Completed sessions provide progress metrics
  • Planning: Predictable durations enable scheduling
  • Freshness: Breaks between sessions prevent fatigue

The optimal session length depends on application complexity, tester experience, and organizational context. Start with 90 minute sessions and adjust based on results.

Session Interruptions

Real world testing involves interruptions: meetings, questions from colleagues, urgent issues. SBTM accounts for this through session metrics that distinguish between:

  • Session Time: Total elapsed time
  • On Charter Time: Time spent on the charter mission
  • Opportunity Time: Time spent on relevant but unplanned activities
  • Setup Time: Time spent on preparation and configuration

Tracking these categories provides accurate productivity measurement and identifies environmental factors affecting testing effectiveness.
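A minimal sketch of how these categories might be summarized for one session (the function and field names are illustrative, not a standard):

```python
def session_breakdown(session_minutes: int, on_charter: int,
                      opportunity: int, setup: int) -> dict:
    """Summarize one session's time allocation, all values in minutes (illustrative)."""
    tracked = on_charter + opportunity + setup
    return {
        "on_charter_pct": round(100 * on_charter / session_minutes, 1),
        "opportunity_pct": round(100 * opportunity / session_minutes, 1),
        "setup_pct": round(100 * setup / session_minutes, 1),
        "untracked_minutes": session_minutes - tracked,  # interruptions, breaks
    }

# A 90 minute session: 60 on charter, 10 on a relevant tangent, 15 on setup
print(session_breakdown(session_minutes=90, on_charter=60, opportunity=10, setup=15))
# {'on_charter_pct': 66.7, 'opportunity_pct': 11.1, 'setup_pct': 16.7, 'untracked_minutes': 5}
```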

3. Session Reports

Session reports document what happened during testing. They capture the testing story: what was explored, what was discovered, and what questions arose. Unlike scripted test results that indicate pass or fail, session reports preserve the context and reasoning behind testing decisions.

Reports serve multiple purposes. They provide evidence of testing activity for stakeholders. They enable knowledge transfer when testers change assignments. They support debugging when developers need to reproduce issues. They create an audit trail demonstrating due diligence.

Report Contents

Standard session reports include:

  • Charter: The mission that guided the session
  • Tester: Who conducted the session
  • Duration: Start time, end time, and time allocation
  • Data Files: Test data, configurations, and inputs used
  • Test Notes: What was tested and how
  • Bugs: Defects discovered with severity and status
  • Issues: Problems encountered during testing (environment, access, tools)
  • Questions: Items requiring clarification or follow up
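Teams that want consistent reports can encode these fields in a simple template. A minimal Python sketch, with illustrative values throughout:

```python
# One possible session report structure; every value here is illustrative.
session_report = {
    "charter": "Explore the checkout process using various payment methods "
               "to discover usability issues and edge cases",
    "tester": "A. Tester",
    "start": "2026-02-06 10:00",
    "end": "2026-02-06 11:30",
    "time_allocation": {"on_charter": 60, "opportunity": 10, "setup": 15},  # minutes
    "data_files": ["checkout_testdata.csv"],
    "test_notes": "Covered card, wallet, and gift card flows; focused on retry behavior.",
    "bugs": [{"id": "BUG-101", "severity": "high", "status": "open"}],
    "issues": ["Staging payment gateway intermittently unavailable"],
    "questions": ["Should gift cards combine with discount codes?"],
}
```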

Note Taking During Sessions

Effective session documentation requires note taking during testing, not reconstruction afterward. Approaches include:

  • Text notes in a dedicated application
  • Screen recordings with verbal commentary
  • Screenshots with annotations
  • Mind maps capturing exploration paths

The goal is sufficient detail to understand what happened without documentation overhead that slows exploration.

4. Debriefing

Debriefing is where individual testing sessions become organizational knowledge. A test lead or manager meets with testers to discuss session outcomes, share findings, and plan next steps. This conversation transforms isolated exploration into coordinated learning.

Without debriefing, session reports sit in repositories unread. Testers discover the same issues repeatedly. Insights remain trapped in individual minds. The investment in exploratory testing produces far less value than it should. Debriefing unlocks that value through structured conversation.

Debrief Structure

Effective debriefs cover:

  • Summary: Brief overview of session activities
  • Findings: Bugs, issues, and observations
  • Coverage: What was explored and what remains
  • Learnings: Insights about the application or testing approach
  • Next Steps: Follow up actions and future sessions

Debriefs should feel like conversations, not interrogations. The goal is understanding and improvement, not performance evaluation.

Debrief Benefits

Regular debriefing delivers:

  • Early visibility into defects and risks
  • Knowledge transfer between testers
  • Coaching opportunities for skill development
  • Course correction when sessions miss important areas
  • Recognition of tester contributions

Organizations that skip debriefing lose much of SBTM's value. The conversation transforms individual sessions into organizational learning.

Implementing Session Based Test Management

Step 1: Define Your Charter Library

Begin with a collection of charters covering critical application areas. Sources for initial charters include:

  • Existing test cases translated into exploratory missions
  • Risk assessments identifying areas requiring investigation
  • User journey maps highlighting key workflows
  • Historical defect data pointing to problem areas

Organize charters by application area, risk level, or testing phase. A searchable library enables testers to select appropriate sessions and managers to track coverage.
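A searchable library can start as simply as tagged records plus a filter, as in this illustrative sketch (the tags and condensed titles are examples, not a prescribed scheme):

```python
# Tiny charter library with tags; titles condensed from the examples above.
charters = [
    {"title": "Explore checkout with various payment methods", "area": "checkout", "risk": "high"},
    {"title": "Investigate profile editing with international characters", "area": "profile", "risk": "medium"},
    {"title": "Test search with complex queries", "area": "search", "risk": "medium"},
]

def find_charters(area: str | None = None, risk: str | None = None) -> list[dict]:
    """Filter the library by application area and/or risk level."""
    return [c for c in charters
            if (area is None or c["area"] == area)
            and (risk is None or c["risk"] == risk)]

print(find_charters(risk="high"))  # -> the checkout charter
```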

Step 2: Establish Session Parameters

Define standard parameters for your organization:

  • Session Duration: How long sessions should last (recommend starting with 90 minutes)
  • Documentation Requirements: What session reports must include
  • Debrief Frequency: When and how often debriefs occur
  • Metrics Tracked: What measurements matter for your context

Document these parameters so all team members apply them consistently. Adjust based on experience, but avoid excessive customization that undermines comparability.
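One way to keep parameters consistent is a shared configuration that reports and reviews reference. A minimal sketch with illustrative defaults (none of these keys or values are prescriptive):

```python
# Shared SBTM parameters; the values shown are illustrative starting points.
SBTM_PARAMETERS = {
    "session_minutes": 90,  # starting duration; tune from pilot results
    "report_required_fields": [
        "charter", "tester", "duration", "test_notes",
        "bugs", "issues", "questions",
    ],
    "debrief_frequency": "within one working day of each session",
    "tracked_metrics": ["sessions_completed", "bugs_per_session", "on_charter_pct"],
}
```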

Step 3: Train Your Team

SBTM success requires testers who understand both the methodology and effective exploration techniques. Training should cover:

  • SBTM concepts and rationale
  • Charter interpretation and execution
  • Exploration heuristics and techniques
  • Documentation practices
  • Debrief participation

Pair inexperienced testers with veterans for initial sessions. Observation and coaching accelerate skill development more effectively than classroom instruction alone.

Step 4: Execute and Iterate

Begin with a pilot covering a limited application area. Learn from early sessions:

  • Are charters appropriately scoped?
  • Does session duration work for your context?
  • Is documentation capturing useful information?
  • Are debriefs productive?

Refine your approach based on pilot results before scaling across the organization.

Measuring SBTM Effectiveness

Measuring SBTM effectiveness requires both quantitative metrics and qualitative assessment. Quantitative metrics track measurable outputs like session counts and defect rates. Qualitative assessment evaluates factors that numbers cannot capture, such as bug significance and team learning. Together, these approaches provide a complete picture of testing value.

1. Quantitative Metrics

SBTM enables measurements impossible with unstructured exploration:

  • Sessions Completed: Volume of testing performed
  • Coverage by Charter: Which areas received attention
  • Bugs per Session: Defect discovery rate
  • Time Allocation: How testers spend their time
  • Session Productivity: Ratio of testing time to setup and interruption time

Track these metrics over time to identify trends and improvement opportunities.
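Given session records like the report template shown earlier, these metrics can be aggregated directly. A minimal sketch, assuming at least one session and that each record carries a bug list and time allocation:

```python
def sbtm_metrics(sessions: list[dict]) -> dict:
    """Aggregate basic SBTM metrics from session records (illustrative sketch)."""
    bugs = sum(len(s["bugs"]) for s in sessions)
    on_charter = sum(s["time_allocation"]["on_charter"] for s in sessions)
    tracked = sum(sum(s["time_allocation"].values()) for s in sessions)
    return {
        "sessions_completed": len(sessions),
        "bugs_per_session": round(bugs / len(sessions), 2),
        "session_productivity_pct": round(100 * on_charter / tracked, 1),
    }
```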

2. Qualitative Assessment

Numbers alone do not capture SBTM value. Qualitative factors include:

  • Bug Quality: Are discovered defects significant or trivial?
  • Knowledge Generation: Is the team learning about the application?
  • Risk Reduction: Are critical areas receiving appropriate attention?
  • Team Development: Are testers improving their skills?

Balance quantitative tracking with qualitative evaluation for complete assessment.

Common SBTM Pitfalls and How to Avoid Them

SBTM implementation looks straightforward but contains subtle traps that undermine effectiveness. Organizations new to the approach frequently encounter these challenges. Recognizing common pitfalls helps teams avoid them and realize SBTM's full potential.

  • Over Documentation: Session reports become so detailed that documentation consumes more time than testing. Focus on essential information only.
  • Charter Rigidity: Testers feel constrained to charter scope even when exploration reveals important tangents. Allow flexibility within reason.
  • Metric Fixation: Managers optimize for session counts rather than defect discovery. Remember that metrics indicate success; they do not define it.
  • Debrief Neglect: Time pressure leads to skipped debriefs. Protect debrief time; it multiplies session value.

Integrating SBTM with Automated Testing

Many teams treat exploratory testing and test automation as separate activities. In practice, they work best when integrated. Exploratory sessions uncover scenarios that automation should cover permanently. Automation results reveal gaps where exploration adds value. Understanding this relationship maximizes the return from both approaches.

1. Automation Informs Exploration

Automated test results highlight areas warranting exploration:

  • Features where automated tests pass but users report issues, indicating gaps between test coverage and real user behavior
  • Code changes affecting areas without corresponding test updates
  • Integration points with limited automated coverage
  • New functionality awaiting automation

Use automation coverage reports to identify exploration targets. Charter sessions specifically for areas where automation provides insufficient confidence.

2. Exploration Informs Automation

Session discoveries become automation candidates:

  • Bugs found during exploration indicate scenarios needing regression coverage
  • Effective exploration paths suggest test case designs
  • Edge cases discovered during sessions deserve automated validation

AI native test platforms accelerate converting exploration findings into automated tests. Natural Language Programming enables testers to describe discovered scenarios in plain English, which the platform translates into executable tests.

3. AI Augmented Exploration

Modern testing platforms enhance exploratory testing through AI capabilities:

  • Intelligent Element Identification: Explore without worrying about precise selectors. Describe elements in natural language and let AI locate them accurately.
  • Real Time Feedback: Live Authoring shows test execution as steps are written, enabling immediate verification of exploration paths.
  • AI Root Cause Analysis: When exploration discovers failures, AI powered diagnostics accelerate understanding through automatic capture of screenshots, network traffic, and console logs.
  • Journey Summaries: AI assistants can summarize exploration findings, helping testers document sessions efficiently.

Virtuoso QA complements SBTM by removing technical barriers to exploration. Testers focus on discovering defects rather than fighting with tooling.

4. From Exploration to Regression

The most valuable exploration findings deserve permanent coverage. The workflow:

  1. Explorer discovers a defect scenario during SBTM session
  2. Developer fixes the defect
  3. Tester creates automated regression test covering the scenario
  4. Test joins the regression suite for continuous validation

AI native platforms make step 3 nearly instant. Instead of translating exploration into code, testers describe what they explored:

"Navigate to checkout, apply discount code SAVE20, verify 20% discount appears, complete purchase with PayPal, verify order confirmation shows discounted total"

The platform creates an executable test from this description, immediately adding the scenario to regression coverage.
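For comparison, the hand-coded equivalent of that description in a conventional framework such as Playwright might look like the sketch below. The URL, selectors, and expected text are hypothetical, and this is not Virtuoso QA's output format; it simply illustrates the translation work that natural language authoring removes.

```python
from playwright.sync_api import sync_playwright, expect

# Hand-coded equivalent of the plain English scenario above.
# URL, selectors, and expected text are hypothetical.
with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto("https://shop.example.com/checkout")
    page.fill("#discount-code", "SAVE20")
    page.click("#apply-discount")
    expect(page.locator("#discount-amount")).to_contain_text("20%")
    page.click("#pay-with-paypal")
    # ...complete the PayPal flow...
    expect(page.locator("#order-confirmation")).to_contain_text("Total after discount")
```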

SBTM for Enterprise Applications

Enterprise applications present unique challenges for session based testing.

1. Complex Business Processes

Enterprise workflows span multiple screens, roles, and systems. Exploration of an order to cash process might involve:

  • Sales representative creating a quote
  • Manager approving the quote
  • Customer accepting terms
  • Warehouse fulfilling the order
  • Finance processing the invoice
  • Collections managing payment

Session charters for enterprise applications should specify which process segment to explore and which role perspective to assume.

2. Test Data Requirements

Enterprise exploration requires realistic data. Exploring insurance claim processing with trivial test data misses complexity that real claims introduce.

AI powered data generation creates contextually appropriate test data on demand. Testers specify data characteristics and receive realistic records supporting meaningful exploration.

3. Integration Points

Enterprise applications integrate with numerous external systems. Exploration should include:

  • Behavior when integrated systems respond normally
  • Handling of slow responses and timeouts
  • Error conditions and failure recovery
  • Data transformation accuracy

Charter sessions specifically for integration exploration. These areas often harbor defects that pure UI testing misses.
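During such a session, a tester might probe the slow-response path directly. A minimal sketch using Python's requests library against a hypothetical integration endpoint:

```python
import requests

# Hypothetical integration endpoint; a tight timeout forces the slow-response path.
try:
    resp = requests.get("https://erp.example.com/api/orders/42", timeout=0.5)
    print("normal response:", resp.status_code)
except requests.exceptions.Timeout:
    # Note for the session report: how does the application behave when the
    # integration is this slow? A blank screen or silent failure is a finding.
    print("timed out after 0.5s")
```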

Transform Your Testing Practice

Session Based Test Management brings accountability to exploratory testing without sacrificing the creativity that makes exploration valuable. Organizations implementing SBTM discover defects that scripted testing misses while maintaining the visibility and measurement that management requires.

Modern AI native platforms like Virtuoso QA amplify SBTM effectiveness. Natural language test authoring converts exploration findings into automated coverage instantly. Self healing tests ensure regression coverage remains stable. AI Root Cause Analysis accelerates defect investigation when sessions discover problems.

Virtuoso QA enables testers to focus on what humans do best: creative exploration and critical thinking. The platform handles element identification, test execution, and diagnostic data capture automatically.

Frequently Asked Questions

How long should a testing session last?
Most organizations find 60 to 120 minutes optimal, with 90 minutes as a common starting point. Shorter sessions may not allow sufficient depth; longer sessions risk fatigue and diminishing returns. Experiment to find the duration that works for your context, considering application complexity and tester experience.
Can SBTM work for remote teams?
Yes. Session documentation and debriefing adapt to remote collaboration through video calls, shared documents, and screen recording. Some organizations find remote debriefs more focused than in person conversations. The key is maintaining debrief discipline regardless of format.
How many sessions should a tester complete per day?
Cognitive intensity limits productive sessions. Most testers complete two to four sessions daily with breaks between. Pushing for higher volume typically reduces session quality. Focus on session effectiveness rather than maximizing count.
What is the difference between SBTM and ad hoc testing?
Ad hoc testing lacks structure, documentation, and accountability. Testers explore without defined missions or recorded findings. SBTM adds charter guidance, time boxing, documentation, and debriefing while preserving exploration flexibility. This structure enables management oversight and progress measurement that ad hoc testing cannot provide.
Should we replace scripted testing with SBTM?
No. SBTM complements scripted testing rather than replacing it. Automated regression tests efficiently validate known scenarios across multiple configurations. SBTM excels at discovering unknown scenarios, exploring new features, and investigating areas where scripted tests provide insufficient confidence. Most organizations benefit from both approaches.
How do you track SBTM coverage?
Coverage tracking requires mapping charters to application areas and recording session completion. Visualization tools showing which areas have received exploration and which remain unexplored help managers allocate sessions strategically. Unlike code coverage metrics for automated tests, SBTM coverage is approximate, reflecting areas explored rather than code paths exercised.
