
A test case is a set of conditions under which testers verify software functionality. This guide explores test case formats, examples, and templates.
A test case is a detailed set of conditions, actions, and expected results used to validate specific software functionality. It defines exactly what to test, how to test it, and what success looks like. Well-written test cases enable consistent, repeatable validation whether executed manually or automated. Traditional test case documentation requires extensive manual effort, becomes outdated quickly, and creates barriers between testers and stakeholders. AI-native testing platforms now enable teams to create executable test cases in natural language. This eliminates documentation overhead, lets business users define validation scenarios without technical expertise, and can accelerate test creation by 80-90% while improving clarity and maintainability.
A test case is a specific set of conditions under which a tester determines whether software functions correctly. It documents the inputs, execution steps, preconditions, and expected outcomes required to validate a particular aspect of application functionality.
Unique identifier enabling reference and tracking. Use clear naming conventions like TC-001, TC-LOGIN-01, or USER-REG-001.
Example: TC-CHECKOUT-CC-01 (Test Case for Checkout using Credit Card, first scenario)
Concise, descriptive summary of what the test validates. Use action-oriented language clearly stating the scenario.
Brief explanation of test purpose and scope. Provides context beyond the title.
Example: "Verify that registered users can successfully complete purchases using valid credit card payment, receive order confirmation, and see order in purchase history."
Conditions that must exist before test execution begins. Includes system state, user accounts, test data, and configuration requirements.
Examples:
- User account exists with username "user@example.com" and password "ValidPass123"
- Application is deployed and accessible in the test environment
- User is logged out before execution begins
Specific data values used during test execution. Eliminates ambiguity and ensures consistent results across runs.
Examples:
- Username: user@example.com
- Password: ValidPass123
- Transfer Amount: $5,000
Sequential actions the tester performs. Each step describes one specific action in clear, unambiguous language.
Format:
Step 1: [Single action the tester performs]
Expected Result: [What the system should do in response]
Explicit description of correct system behavior for each step. Defines success criteria so testers know whether tests passed or failed.
Examples:
- "Login page displays with username field, password field, and Submit button"
- "User is redirected to the dashboard page with a welcome message"
Space for documenting what actually happened during execution. Completed during test execution, not during test case creation.
Test execution outcome: Pass, Fail, Blocked, or Skipped.
Definitions:
- Pass: Actual results match expected results
- Fail: Actual results deviate from expected results
- Blocked: Test cannot run because a precondition is unmet or a blocking defect exists
- Skipped: Test intentionally not executed in this cycle
Business importance indicating testing sequence. Typically High, Medium, or Low (or P1, P2, P3).
Priority Factors:
- Business impact of the functionality under test
- How frequently end users exercise the workflow
- Likelihood and severity of potential defects
- Regulatory or compliance exposure
Links to user stories, requirements documents, or acceptance criteria this test validates. Enables traceability.
Example: Requirements: US-123, AC-456
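Taken together, these components map naturally onto a simple data structure. Below is a minimal sketch in Python (field names are illustrative, not taken from any specific tool) showing one way to represent a test case programmatically:

from dataclasses import dataclass
from enum import Enum
from typing import List

class Status(Enum):
    NOT_RUN = "Not Run"
    PASS = "Pass"
    FAIL = "Fail"
    BLOCKED = "Blocked"
    SKIPPED = "Skipped"

@dataclass
class TestStep:
    action: str            # one specific action per step
    expected_result: str   # explicit success criteria for that action

@dataclass
class TestCase:
    case_id: str                 # e.g. "TC-LOGIN-001"
    name: str
    description: str
    priority: str                # High / Medium / Low
    requirements: List[str]      # traceability links, e.g. ["US-101"]
    preconditions: List[str]
    test_data: dict
    steps: List[TestStep]
    status: Status = Status.NOT_RUN
    actual_results: str = ""     # filled in during execution, not creation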
Traditional format with detailed step-by-step instructions. Most common format for manual test execution.
Test Case ID: TC-LOGIN-001
Test Case Name: User Login with Valid Credentials
Priority: High
Related Requirements: US-101 (User Authentication)
Preconditions:
- User account exists with username "user@example.com" and password "ValidPass123"
- Application is accessible and running
- User is logged out
Test Data:
- Username: user@example.com
- Password: ValidPass123
Test Steps:
Step 1: Navigate to application URL (https://app.example.com)
Expected Result: Application homepage loads with "Login" button visible in top right corner
Step 2: Click "Login" button
Expected Result: Login page displays with username field, password field, and "Submit" button
Step 3: Enter username "user@example.com" in username field
Expected Result: Username field accepts input and displays entered value
Step 4: Enter password "ValidPass123" in password field
Expected Result: Password field accepts input and displays masked characters (dots or asterisks)
Step 5: Click "Submit" button
Expected Result:
- User is redirected to the dashboard page
- Welcome message displays: "Welcome, Test User"
- User menu shows logged-in state with user email
- Session token created (verify via browser developer tools if needed)
Actual Results: [To be filled during execution]
Status: [Pass/Fail/Blocked]
Executed By: _____________
Date: _____________
Notes: _____________
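A test case this explicit translates almost line for line into an automated script. Here is a minimal sketch using Playwright's Python sync API; the URL and the element selectors are assumptions about the example application, not real identifiers:

# Automated version of TC-LOGIN-001 (pytest + Playwright sync API).
# Selectors and URL are hypothetical placeholders for the example app.
from playwright.sync_api import sync_playwright, expect

def test_login_with_valid_credentials():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Step 1: Navigate to application URL
        page.goto("https://app.example.com")

        # Step 2: Click "Login" button
        page.get_by_role("button", name="Login").click()

        # Steps 3-4: Enter username and password
        page.get_by_label("Username").fill("user@example.com")
        page.get_by_label("Password").fill("ValidPass123")

        # Step 5: Click "Submit" button
        page.get_by_role("button", name="Submit").click()

        # Expected results become assertions
        expect(page).to_have_url("https://app.example.com/dashboard")
        expect(page.get_by_text("Welcome, Test User")).to_be_visible()

        browser.close()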
Uses Given-When-Then structure emphasizing behavior specification. Popular in Agile teams practicing BDD.
Feature: User Authentication
Scenario: Successful login with valid credentials
Given the user is on the login page
And the user has a registered account with username "user@example.com"
When the user enters username "user@example.com"
And the user enters password "ValidPass123"
And the user clicks the "Submit" button
Then the user should be redirected to the dashboard
And the welcome message should display "Welcome, Test User"
And the user menu should show the user email "user@example.com"
And a session token should be created
Priority: High
Requirements: US-101
Test Data: See attached data file
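Each Given/When/Then line binds to a step definition in the test framework. A minimal sketch using Python's behave library (the context.app and context.page helpers are hypothetical page objects, assumed to be set up in environment.py):

# steps/login_steps.py: behave step definitions for the scenario above.
from behave import given, when, then

@given('the user is on the login page')
def step_open_login_page(context):
    context.page = context.app.open_login_page()

@given('the user has a registered account with username "{username}"')
def step_account_exists(context, username):
    context.app.ensure_user(username, password="ValidPass123")

@when('the user enters username "{username}"')
def step_enter_username(context, username):
    context.page.fill_username(username)

@when('the user enters password "{password}"')
def step_enter_password(context, password):
    context.page.fill_password(password)

@when('the user clicks the "{label}" button')
def step_click_button(context, label):
    context.page.click(label)

@then('the user should be redirected to the dashboard')
def step_verify_dashboard(context):
    assert context.page.current_url().endswith("/dashboard")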
Lightweight format for exploratory testing sessions. Defines mission, time box, and areas to explore without prescriptive steps.
Charter: Explore checkout process for payment handling edge cases
Time Box: 90 minutes
Mission:
Investigate how the checkout process handles various payment scenarios beyond happy path, focusing on edge cases and unusual inputs.
Areas to Explore:
- Expired credit cards
- Declined transactions
- Network timeouts during payment
- Multiple rapid payment submissions
- Browser back button during payment processing
- Session timeout during checkout
- Invalid CVV codes
- International credit cards
- Alternative payment methods
Success Criteria:
- Document all edge cases discovered
- Log defects for any error handling issues
- Identify usability problems in error messages
- Verify security controls around payment data
Notes: [Documented during session]
Defects Found: [List defect IDs]
Questions Raised: [Areas requiring clarification]
Simplified format listing validations without detailed steps. Useful for smoke testing or experienced testers.
Feature: User Registration - Smoke Test Checklist
Test Conditions to Verify:
☐ Registration form loads with all required fields
☐ Email validation prevents invalid email formats
☐ Password strength indicator shows real-time feedback
☐ "Username already exists" error displays for duplicate usernames
☐ Successful registration creates user account in database
☐ Confirmation email is sent within 60 seconds
☐ User can log in immediately after registration
☐ User profile page displays correct information
☐ Account appears in admin user management panel
Priority: Critical
Requirements: US-200 series
Execution Time: ~15 minutes
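When a checklist like this graduates to automation, each checkbox typically becomes one small, focused test. As a hedged sketch, here is the email-validation item expanded into a parametrized pytest test; validate_email is a stand-in for the application's real validator, invented for illustration:

import re
import pytest

def validate_email(address: str) -> bool:
    # Stand-in for the application's real validator.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

@pytest.mark.parametrize("address,expected", [
    ("customer@example.com", True),    # valid format
    ("no-at-sign.example.com", False), # missing @
    ("missing-domain@", False),        # nothing after @
    ("spaces in@example.com", False),  # whitespace not allowed
])
def test_email_validation(address, expected):
    assert validate_email(address) == expected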
Write test cases in simple, unambiguous language anyone can understand. Avoid technical jargon unless writing for technical audiences.
Good: "Click the 'Add to Cart' button located below the product image"
Poor: "Trigger the onclick event handler for the DOM element with class 'btn-cart'"
Each test step should represent one action. Don't combine multiple actions into single steps.
Wrong: "Login and navigate to settings and update password"
Right:
Step 1: Log in with valid credentials
Step 2: Navigate to the Settings page
Step 3: Update the password
Don't assume testers know what should happen. Explicitly state expected outcomes for each action.
Incomplete: "Click login button"
Complete: "Click login button → Expected: User redirects to dashboard page with welcome message displayed"
Provide specific, realistic test data rather than placeholders or generic values.
Vague: "Enter a valid email address"
Specific: "Enter email address: customer@example.com"
Adjust technical detail based on who executes tests. Manual testers need explicit instructions. Experienced testers can handle higher-level descriptions.
For Manual Testers: "Click the blue 'Submit' button at the bottom right of the form"
For Experienced Testers: "Submit the registration form"
Document everything that must be true before testing begins. Don't assume testers know setup requirements.
Example Preconditions:
- Test environment is running the latest build
- Test user account exists with known credentials
- Required test data is loaded in the database
Link every test case to specific requirements or user stories. This traceability ensures complete requirements coverage.
Each test should execute independently without depending on other tests running first. Dependencies between tests create fragile test suites. A fixture-based sketch follows the example below.
Wrong: Test Case 2 assumes Test Case 1 already created a user account
Right: Test Case 2 explicitly creates required user account in preconditions or setup
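In automation, this independence is usually achieved with setup fixtures rather than test ordering. A minimal pytest sketch, using a fake in-memory user API purely for illustration:

import pytest

class FakeUserApi:
    # Stand-in for a real user-provisioning API.
    def __init__(self):
        self.users = {}
    def create_user(self, email, password):
        self.users[email] = password
        return email
    def delete_user(self, email):
        self.users.pop(email, None)

@pytest.fixture
def api():
    return FakeUserApi()

@pytest.fixture
def registered_user(api):
    # Each test provisions its own account instead of relying on an
    # earlier test having created one.
    email = api.create_user("user@example.com", "ValidPass123")
    yield email
    api.delete_user(email)  # clean up so later tests stay isolated

def test_profile_exists(registered_user, api):
    assert registered_user in api.users  # account exists for this test alone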
Problem: "Check that the system works correctly"
Solution: "Verify order confirmation displays order number, total amount, estimated delivery date, and shipping address"
Problem: Step describes action without stating what should happen
Solution: Every step includes explicit expected result defining success
Problem: One test case validates login, profile update, and logout in single test
Solution: Create separate test cases for each scenario enabling targeted testing
Problem: "Enter valid credentials"
Solution: "Enter username: test@example.com, password: TestPass123"
Problem: Steps assume tester knows navigation, system quirks, or business rules
Solution: Document all information needed for successful test execution
Problem: Test cases filled with technical jargon incomprehensible to business stakeholders
Solution: Use plain language stakeholders understand while maintaining precision
Test Case: Domestic Wire Transfer - Happy Path
Preconditions:
- Customer logged into online banking
- Source account (Checking #1234) has available balance of $50,000
- Destination account verified and saved
Test Data:
- Source Account: Checking #1234 (Balance: $50,000)
- Destination Account: 987654321 (Routing: 021000021)
- Transfer Amount: $5,000
- Transfer Date: Same day
Steps:
1. Navigate to "Transfers" section
Expected: Transfer page displays with transfer types listed
2. Select "Wire Transfer"
Expected: Wire transfer form displays with source/destination fields
3. Select source account "Checking #1234"
Expected: Current balance displays ($50,000)
4. Enter destination account "987654321" and routing "021000021"
Expected: System validates routing number, displays bank name
5. Enter amount "$5,000"
Expected: System validates sufficient funds, displays fees ($25)
6. Review transfer details and confirm
Expected: Confirmation screen shows all details for review
7. Authorize with secure code
Expected: Wire processes successfully, a confirmation number is generated
8. Verify account balances update
Expected: Source account shows $44,975 ($5,000 transfer plus $25 fee deducted)
Pass Criteria: Transfer completes, balances update, confirmation received
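The expected closing balance in step 8 is worth double-checking, since fee handling is a common source of off-by-one-fee errors. As plain Python arithmetic:

# Sanity check for step 8's expected balance.
opening_balance = 50_000.00
transfer_amount = 5_000.00
wire_fee = 25.00
expected_closing = opening_balance - transfer_amount - wire_fee
assert expected_closing == 44_975.00  # matches the expected result above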
Test Case: Nurse Administers Scheduled Medication
Preconditions:
- Nurse logged into EHR system
- Patient admitted with active orders
- Scheduled medication due for administration
Test Data:
- Patient: John Smith (MRN: 12345)
- Medication: Lisinopril 10mg oral daily
- Administration Time: 08:00
Steps:
1. Scan patient wristband
Expected: Patient record displays with active orders
2. Navigate to Medication Administration Record (MAR)
Expected: MAR displays scheduled medications for current shift
3. Select Lisinopril 10mg from scheduled list
Expected: Medication details display with barcode scan prompt
4. Scan medication barcode
Expected: System verifies 5 rights (right patient, drug, dose, route, time)
5. Document administration
Expected: System timestamps administration, updates MAR status
6. Verify MAR updates
Expected: Medication marked "Given" with nurse signature and timestamp
Clinical Safety Validation: System prevents wrong medication, wrong dose, wrong patient
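The "5 rights" check in step 4 is essentially a field-by-field comparison between the scanned medication and the active order. A simplified, hypothetical sketch of that logic (real barcode medication administration systems are far more involved):

from dataclasses import dataclass

@dataclass
class MedOrder:
    patient_mrn: str
    drug: str
    dose: str
    route: str
    scheduled_time: str

def verify_five_rights(scanned: MedOrder, order: MedOrder) -> list:
    """Return the list of mismatched rights; an empty list means safe to give."""
    errors = []
    if scanned.patient_mrn != order.patient_mrn:
        errors.append("wrong patient")
    if scanned.drug != order.drug:
        errors.append("wrong drug")
    if scanned.dose != order.dose:
        errors.append("wrong dose")
    if scanned.route != order.route:
        errors.append("wrong route")
    if scanned.scheduled_time != order.scheduled_time:
        errors.append("wrong time")
    return errors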
Test Case: In-Store Return of Online Purchase
Preconditions:
- Customer purchased item online 5 days ago (Order #12345)
- Item delivered and in original packaging
- Return window is 30 days
Test Data:
- Order Number: 12345
- Item: Wireless Mouse (SKU: WM-001)
- Original Price: $49.99
- Return Reason: Changed mind
Steps:
1. Store associate scans order number or email lookup
Expected: Order details display with eligible return items
2. Associate selects item for return
Expected: Return policy displays (30-day window confirmed)
3. Customer selects return reason "Changed mind"
Expected: System validates item condition requirements
4. Associate inspects item and confirms condition
Expected: System generates return authorization
5. System processes refund to original payment method
Expected: Refund confirmation displays, estimated 5-7 business days
6. Customer receives return receipt
Expected: Receipt shows returned item, refund amount, timeline
Validation: Inventory updates, order status changes, refund processes correctly
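The return-window check in this flow is a small piece of date arithmetic worth pinning down in a test. A hedged sketch (the function name is invented for illustration):

from datetime import date, timedelta

def within_return_window(purchase_date, today, window_days=30):
    return (today - purchase_date).days <= window_days

purchase = date(2024, 6, 1)
assert within_return_window(purchase, purchase + timedelta(days=5))       # eligible (5 days ago)
assert not within_return_window(purchase, purchase + timedelta(days=31))  # window expired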
Traditional test case documentation creates bottlenecks through manual writing, constant maintenance, and technical barriers limiting who can create tests.
AI-native platforms enable teams to write test cases in plain English without formal documentation structures or technical expertise.
Traditional Approach: formal test case documents with IDs, preconditions, test data tables, numbered steps, and expected results, written and maintained by hand (as in the step-by-step format shown earlier).
AI-Native Approach: a plain-English description of the same validation:
Verify customers can purchase products with saved credit cards
1. Login as existing customer
2. Add product to cart
3. Proceed to checkout
4. Select saved payment method
5. Confirm order
6. Verify order confirmation displays
7. Check email for confirmation
AI platform translates natural language into executable tests automatically. No formal templates. No technical barriers. Business users create tests.
AI-powered self-healing eliminates test case maintenance burden. When applications change, test cases adapt automatically without manual updates.
Traditional Challenge: Application changes require updating hundreds of test cases manually
AI-Native Solution: 95% self-healing accuracy means test cases continue working despite application changes
Test cases become living documentation automatically updated as tests execute. Results, screenshots, and execution evidence capture system behavior automatically.
Group related test cases by feature, user workflow, or business process. Logical organization enables efficient test selection and execution.
Store test cases in version control systems alongside code. Track changes, enable collaboration, and maintain history.
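For teams using BDD-style test cases, this often means keeping feature files and step definitions in the same repository as the application so they are reviewed and versioned like code. One possible layout (directory and file names are illustrative):

repo/
  src/                    # application code
  tests/
    features/
      login.feature       # Gherkin test cases, reviewed like code
      checkout.feature
    steps/
      login_steps.py      # step definitions binding Gherkin to the app
    data/
      users.json          # versioned test data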
Schedule periodic reviews removing obsolete test cases, updating outdated scenarios, and identifying coverage gaps.
Link every test case to specific requirements ensuring complete validation coverage and supporting compliance audits.
Standardize test case IDs, names, and organization enabling team members to locate and reference tests easily.
Track which test cases find defects, which rarely execute, and which require frequent maintenance. Optimize test suites based on data.
Virtuoso QA eliminates traditional test case documentation overhead through natural language test creation enabling business users to define validation without technical expertise.
Describe test cases in plain English using Virtuoso QA's intuitive interface. AI translates descriptions into executable tests automatically.
As teams create test cases, Virtuoso QA validates steps against actual applications in real time. Incorrect steps highlighted immediately.
Virtuoso QA analyzes applications and suggests test steps automatically. Describe what to validate; StepIQ generates how to test it.
Build reusable test case components shared across teams. Common workflows become building blocks accelerating test case creation.
95% self-healing accuracy means test cases adapt automatically to application changes without maintenance.
Test scenarios are high-level descriptions of what to test (e.g., "User Login"). Test cases are detailed, step-by-step implementations of scenarios specifying exact actions, data, and expected results. One scenario typically generates multiple test cases covering happy paths, edge cases, and error conditions.
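In automated suites, this one-scenario-to-many-cases expansion is commonly expressed with parametrization. A minimal self-contained pytest sketch (attempt_login is a stand-in for the real application call):

import pytest

def attempt_login(username, password):
    # Stand-in for the real application call, for illustration only.
    if username == "user@example.com" and password == "ValidPass123":
        return "dashboard"
    return "error"

@pytest.mark.parametrize("username,password,expected", [
    ("user@example.com", "ValidPass123", "dashboard"),  # happy path
    ("user@example.com", "WrongPass",    "error"),      # invalid password
    ("",                 "ValidPass123", "error"),      # missing username
])
def test_login_scenarios(username, password, expected):
    assert attempt_login(username, password) == expected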
Coverage depends on application complexity and risk. Start with test cases covering critical user workflows and business-critical functionality. Typical applications require 500-5,000 test cases for comprehensive coverage. Focus on quality over quantity—well-designed test cases covering critical paths provide more value than exhaustive test cases for trivial functionality.
Detail level depends on audience and automation plans. Manual testers need detailed, explicit instructions. Automated tests can use higher-level descriptions. BDD-style test cases work well for collaboration with non-technical stakeholders. Choose detail level appropriate for your context.
Agile teams create test cases during sprint planning based on user stories and acceptance criteria. Use BDD format enabling collaboration with product owners and developers. Create automated test cases executing continuously in CI/CD pipelines. Update test cases as stories evolve during sprints.
Common tools include Jira (with Xray or Zephyr plugins), TestRail, Zephyr, qTest, and HP ALM. AI-native platforms like Virtuoso enable test case creation in natural language without separate test management tools. Choose tools integrating with your development workflow.
Yes. Well-written test cases translate directly into automated tests. Detailed test steps become automation script instructions. Expected results become automated assertions. AI-native platforms automate test cases directly from natural language descriptions without coding.
Update test cases when requirements change, application functionality evolves, or test cases become obsolete. With AI-powered self-healing, automated test cases update automatically when applications change, eliminating manual maintenance. Manual test cases require updates whenever application behavior changes.
Good test cases are clear (anyone can understand), specific (explicit actions and expected results), independent (no dependencies on other tests), repeatable (consistent results across executions), traceable (linked to requirements), and maintainable (easy to update). They validate one thing well rather than many things poorly.
Track defect detection rates (percentage of defects found by test cases), requirements coverage (percentage of requirements with test cases), execution frequency (how often tests run), and maintenance burden (time spent updating test cases). Quality test cases find defects, cover requirements comprehensively, execute frequently, and require minimal maintenance.
Test cases progress through states: Draft (under development), Review (awaiting approval), Approved (ready for use), Executed (run with results), Pass/Fail (outcome documented), and Obsolete (no longer relevant). Version control and status tracking manage test case lifecycles effectively.