
Bug Report: Template, Examples, and Best Practices

Rishabh Kumar
Marketing Lead
Published on April 24, 2026

Discover how to write bug reports that get fixed fast, with templates, real examples, severity vs priority explained, and AI native testing best practices.

A bug report is the single most important communication artifact between a tester and a developer. A well written bug report gets fixed quickly. A poorly written one gets deprioritized, bounced back for more information, or ignored entirely. The difference between the two is not writing skill. It is structure, evidence, and clarity. This guide explains exactly what a bug report is, walks through every field you need to include, provides a ready to use template, and shows how modern AI powered testing platforms are automating the most tedious and error prone parts of bug reporting.

What is a Bug Report

A bug report (also called a defect report) is a documented record of a software defect that describes the problem, the conditions under which it occurs, and the evidence needed for a developer to reproduce and fix it. It is the formal mechanism through which quality assurance teams communicate defects to development teams.

A bug report serves three purposes.

  • It provides developers with everything they need to understand, reproduce, and resolve the defect without additional back and forth.
  • It creates a permanent record that can be tracked through the defect lifecycle from discovery to resolution.
  • It generates organizational data that reveals patterns in defect types, defect density, and quality trends over time.

Bug Report vs Bug Log vs Bug Tracker

These terms are related but distinct.

  • A bug report is an individual document describing a single defect.
  • A bug log is a running record or list of all discovered defects, often maintained in a spreadsheet or database during a testing cycle.
  • A bug tracker is the tool used to manage bug reports throughout their lifecycle, such as Jira, Azure DevOps, TestRail, or Xray.

Why Bug Reports Matter More Than Most Teams Realize

Poor bug reports are one of the most expensive inefficiencies in software development, and one of the least measured. When a developer receives a bug report that lacks reproduction steps, environment details, or visual evidence, they have two options: spend their own time investigating what the tester should have documented, or send the report back for clarification.

Both options waste time. The investigation path means developers are debugging instead of building features. The clarification path means the defect sits unresolved while messages go back and forth between tester and developer. In organizations with time zone differences between QA and development teams, a single clarification request can add 24 hours to defect resolution time.

Industry research suggests that the cost of fixing a defect increases exponentially the later it is discovered in the development lifecycle. A defect caught and clearly documented during testing costs a fraction of what the same defect costs when it reaches production. But the key phrase is "clearly documented." A bug that is found but poorly reported provides little more value than a bug that was not found at all if it cannot be reproduced and fixed.

Essential Components of a Bug Report

Every effective bug report contains these core elements. Missing any one of them increases the probability of the report being deprioritized or returned for additional information.


Bug ID

A unique identifier assigned by the bug tracking system. This enables unambiguous reference in communications, code commits, and release notes. Most tracking tools generate this automatically (for example, JIRA-1234 or BUG-5678).

Title / Summary

A concise, descriptive title that tells a developer exactly what the problem is without opening the full report. The title should include the affected feature, the nature of the defect, and the condition under which it occurs.

  • Weak title: "Button doesn't work"
  • Strong title: "Submit button on checkout page unresponsive when shipping address contains special characters"

The strong title tells the developer which button, which page, and which condition. They can begin investigating before even reading the full report.

Description

A clear explanation of what the defect is and why it matters. The description should explain the business impact:

  • Does this defect prevent a user from completing a transaction?
  • Does it produce incorrect calculations?
  • Does it display wrong information?

Context about the user impact helps product managers and development leads prioritize correctly.

Environment

The specific configuration where the defect was observed. This includes the operating system and version, browser and version, device type (desktop, tablet, mobile), application version or build number, and the test environment (staging, QA, pre production).

Environment details are critical because many defects are environment specific. A bug that only appears on Safari 17 on macOS Sonoma will never be reproduced if the developer is testing on Chrome on Windows.

Steps to Reproduce

The sequential actions a developer must take to observe the defect. Steps to reproduce are the most important section of any bug report. They must be specific enough that anyone can follow them and see the same result.

Weak steps: "Go to checkout and try to pay. It doesn't work."

Strong steps:

  1. Navigate to the product catalog page.
  2. Add any product to the cart.
  3. Click "Proceed to Checkout."
  4. Enter a shipping address with an ampersand (&) in the street field (for example, "123 Smith & Sons Blvd").
  5. Select "Credit Card" as the payment method.
  6. Enter valid test card details.
  7. Click "Submit Order."
  8. Observe that the Submit button becomes unresponsive and no order confirmation appears.

Expected Result

What should happen when the steps are followed correctly. This establishes the baseline against which the defect is measured. "The order should be submitted successfully and the user should see an order confirmation page with an order number."

Actual Result

What actually happens. "The Submit button becomes unresponsive. No order confirmation is displayed. The browser console shows a JavaScript error: 'Uncaught TypeError: Cannot read property sanitize of undefined.'"

Severity

The technical impact of the defect on system functionality.

Common severity levels include:

  • Critical: The system crashes, data is lost, or a core function is completely broken with no workaround.
  • High: A major feature is broken but the system does not crash. There may be a workaround but it is not acceptable for production.
  • Medium: A feature does not work as expected but there is a reasonable workaround available.
  • Low: A minor issue such as a cosmetic defect, typo, or non functional imperfection that does not affect usability.

Priority

The business urgency of fixing the defect. Priority is determined by product management or the development lead based on business impact, customer visibility, and release timeline. A low severity cosmetic defect on the homepage might be high priority because every customer sees it. A high severity crash in an admin tool used by two people might be medium priority.

Attachments and Evidence

Visual evidence transforms a bug report from a description into proof. Attachments should include screenshots showing the defect state, video recordings of the reproduction steps, browser console logs, network request/response data, and application log excerpts.

The more evidence a bug report includes, the faster a developer can diagnose the root cause without needing to reproduce the defect first.

Sample Bug Report Template

A sample bug report format designed to capture every detail needed for fast, consistent defect triage. Every bracketed field is a prompt to fill in with real data.
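Rendered as plain text, a minimal version of such a template looks like the sketch below. The field names follow the components described earlier in this guide; every bracketed value is a prompt to replace with real data.

```markdown
**Bug ID:** [auto-generated by tracker, e.g. BUG-5678]
**Title:** [feature + defect + condition, e.g. "Submit button unresponsive when address contains '&'"]
**Environment:** [OS/version, browser/version, device, build number, test environment]
**Severity:** [Critical / High / Medium / Low]
**Priority:** [P1 / P2 / P3 / P4 — set by product or development lead]
**Description:** [what the defect is and its business impact]

**Steps to Reproduce:**
1. [exact action with specific values]
2. [next action]
3. [observation point]

**Expected Result:** [what should happen]
**Actual Result:** [what actually happens, including exact error messages]
**Attachments:** [screenshots, video, console logs, network traces]
**Additional Notes:** [frequency, workarounds tried, related defects]
```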

Common Bug Report Examples

Example 1: Functional Defect

Title: Cart total displays incorrect amount when removing item with active coupon

Environment: Chrome 120 on Windows 11, Staging environment, Build 2.8.4

Severity: High

Priority: P1

Description: When a customer applies a percentage discount coupon and then removes one of the items from their cart, the discount is recalculated incorrectly. The total shows the discounted price of the removed item subtracted twice, resulting in a lower total than correct. This could result in revenue loss if the order processes at the incorrect amount.

Steps to Reproduce:

  1. Add Product A ($50) and Product B ($30) to the cart.
  2. Apply coupon code "SAVE20" (20% discount).
  3. Verify cart total shows $64 (correct: $80 minus 20%).
  4. Remove Product B from the cart.
  5. Observe that the cart total shows $34 instead of the correct $40 (Product A at $50 minus 20%).

Expected Result: Cart total should recalculate to $40 ($50 minus 20% discount).

Actual Result: Cart total displays $34. The 20% discount on Product B ($6) appears to be subtracted an additional time.
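The arithmetic in this example can be checked in a few lines. The function names and structure below are illustrative only, not the application's actual code; they simply show that the observed $34 is consistent with the removed item's discount being subtracted a second time:

```python
def cart_total(prices, discount_pct):
    """Correct recalculation: apply the discount to the current subtotal."""
    return round(sum(prices) * (1 - discount_pct), 2)

def buggy_total_after_removal(prices, removed_price, discount_pct):
    """One plausible bug: the removed item's discount is subtracted again."""
    return round(cart_total(prices, discount_pct) - removed_price * discount_pct, 2)

# Cart: Product A ($50) + Product B ($30), coupon SAVE20 (20% off)
assert cart_total([50, 30], 0.20) == 64.0                  # before removal
assert cart_total([50], 0.20) == 40.0                      # expected after removing B
assert buggy_total_after_removal([50], 30, 0.20) == 34.0   # observed defect
```

Including this kind of arithmetic trace in the report's Additional Notes can save the developer a diagnosis step, as long as it is clearly labeled as a hypothesis rather than an observation.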

Example 2: UI/Visual Defect

Title: Login form password field overlaps "Forgot Password" link on mobile Safari

Environment: Safari 17.2 on iOS 17.2, iPhone 15 Pro, Production environment

Severity: Medium

Priority: P2

Description: On mobile Safari, the password input field extends beyond its container and overlaps the "Forgot Password" link below it. Users cannot tap the "Forgot Password" link without first tapping into the password field. This creates friction for users attempting to recover their credentials.

Steps to Reproduce:

  1. Open the login page on Safari on an iPhone 15 Pro (iOS 17.2).
  2. Tap the password field to activate the keyboard.
  3. Observe that the password field visually overlaps the "Forgot Password" link.
  4. Attempt to tap "Forgot Password" while the keyboard is active.

Expected Result: The password field should remain within its container bounds and the "Forgot Password" link should remain accessible.

Actual Result: The password field overlaps the link. Tapping the overlapping area activates the password field instead of the link.

Example 3: API/Integration Defect

Title: Order API returns 200 OK but fails to create order record when inventory is zero

Environment: QA environment, API v2.4, Build 3.1.0

Severity: Critical

Priority: P1

Description: The order creation API endpoint returns a 200 OK response with an order confirmation number even when the requested product has zero inventory. No order record is created in the database, but the customer receives a confirmation email with the returned order number. This results in phantom orders that appear confirmed to customers but do not exist in the system.

Steps to Reproduce:

  1. Set Product ID 4521 inventory to 0 in the admin panel.
  2. Send POST request to /api/v2/orders with Product ID 4521, quantity 1.
  3. Observe 200 OK response with order_id in the response body.
  4. Query the orders database for the returned order_id.
  5. Observe that no record exists for that order_id.

Expected Result: API should return 400 or 422 with an error message indicating insufficient inventory.

Actual Result: API returns 200 OK with a generated order_id. No database record is created. Confirmation email is triggered.
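The contract this report asserts can be sketched as a server-side guard: reject the order before any side effect (order ID generation, confirmation email) occurs. This is a simplified, hypothetical handler with made-up inventory data, not the actual API implementation:

```python
import uuid

INVENTORY = {4521: 0, 1001: 5}   # hypothetical product inventory
ORDERS = {}                      # stand-in for the orders table

def create_order(product_id, quantity):
    """Return (status_code, body). Validate inventory before any side effects."""
    stock = INVENTORY.get(product_id, 0)
    if stock < quantity:
        # Expected behaviour: explicit error, no order_id, no email trigger
        return 422, {"error": "insufficient_inventory", "product_id": product_id}
    order_id = str(uuid.uuid4())
    ORDERS[order_id] = {"product_id": product_id, "quantity": quantity}
    INVENTORY[product_id] = stock - quantity
    return 200, {"order_id": order_id}

status, body = create_order(4521, 1)
assert status == 422 and "order_id" not in body    # no phantom order
status, body = create_order(1001, 1)
assert status == 200 and body["order_id"] in ORDERS
```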


Severity Vs Priority - How Are They Different?

Severity and priority are the two most commonly confused fields in a bug report. They measure different things and are owned by different people. Severity is a technical assessment made by QA. Priority is a business decision made by product management or the development lead.

  • Severity: how broken is it? A technical assessment of the defect's impact on functionality, assigned by QA when the defect is reported.
  • Priority: how soon must it be fixed? A business judgment based on customer impact, visibility, and release timeline, assigned by product management or the development lead.

A critical severity defect is not automatically a P1 priority. A low severity defect is not automatically low priority. The two dimensions are independent. A well-written bug report always includes both, with a brief rationale for the priority assignment when it might not be obvious from the severity alone.
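The independence of the two dimensions can be made concrete with a small triage sketch. The mapping below is purely illustrative; a real priority call weighs release timelines and customer context that no lookup table captures, and a human always owns the final decision:

```python
def suggest_priority(severity, customer_visible, has_workaround):
    """Heuristic starting point for triage, not a substitute for judgment."""
    if severity == "critical" and customer_visible:
        return "P1"
    if customer_visible and not has_workaround:
        return "P1"   # even a low-severity defect every customer sees
    if severity in ("critical", "high"):
        return "P2"   # severe but hidden, e.g. an internal admin tool
    return "P3"

# Low-severity cosmetic defect on the homepage: high priority.
assert suggest_priority("low", customer_visible=True, has_workaround=False) == "P1"
# High-severity crash in a two-person admin tool: lower priority.
assert suggest_priority("high", customer_visible=False, has_workaround=True) == "P2"
```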

The Bug Lifecycle: From Discovery to Closure

Understanding the bug lifecycle ensures that reports move efficiently through the resolution process rather than stalling in queues or bouncing between teams.

New

The defect has been reported and entered into the tracking system. It has not yet been reviewed by a developer or development lead.

Open / Assigned

The report has been reviewed, accepted as a valid defect, and assigned to a developer for investigation. If the report is rejected (not reproducible, duplicate, or by design), it moves to a Rejected or Closed status with an explanation.

In Progress

The assigned developer is actively working on a fix. This status indicates that the defect has been reproduced and a solution is being developed.

Fixed

The developer has implemented a fix and deployed it to a test environment. The fix is now ready for verification by the QA team.

Verified

The QA team has confirmed that the fix resolves the defect without introducing new issues. The verification should follow the original reproduction steps and also check for regression in related functionality.

Closed

The defect is resolved and verified. The bug report is closed and archived. If the defect reappears later, a new bug report should reference the original closed report for context.

Reopened

A previously closed defect has been observed again, either because the fix was incomplete or because a subsequent change reintroduced the issue. Reopened defects typically receive elevated priority.
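The lifecycle above can be encoded as a small state machine, which is essentially how trackers enforce valid status transitions. The transition table here is a sketch of the statuses described in this section, not any particular tool's workflow:

```python
# Allowed transitions between the statuses described above.
TRANSITIONS = {
    "New":         {"Open", "Rejected"},
    "Open":        {"In Progress", "Rejected"},
    "In Progress": {"Fixed"},
    "Fixed":       {"Verified", "Reopened"},   # QA verification can fail
    "Verified":    {"Closed"},
    "Closed":      {"Reopened"},
    "Reopened":    {"In Progress"},
    "Rejected":    set(),
}

def advance(current, target):
    """Move a defect to a new status, rejecting transitions the workflow forbids."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid transition: {current} -> {target}")
    return target

# Happy path: discovery through closure.
state = "New"
for step in ("Open", "In Progress", "Fixed", "Verified", "Closed"):
    state = advance(state, step)
assert state == "Closed"
assert advance("Closed", "Reopened") == "Reopened"   # defect resurfaces
```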

Common Bug Report Mistakes and How to Avoid Them

1. Vague Titles

Titles like "page broken" or "error on click" force developers to open the full report before they can even assess relevance. Write titles that describe the defect specifically enough to be useful in a list view.

2. Missing Reproduction Steps

A bug report without reproduction steps is a feature request for investigation. Developers cannot fix what they cannot observe. If you can reproduce the defect, document every step. If you cannot reliably reproduce it, document the conditions under which you observed it and note that reproduction is intermittent.

3. Combining Multiple Defects in One Report

Each bug report should describe exactly one defect. Combining multiple issues in a single report creates confusion about when the report can be closed, makes tracking and metrics inaccurate, and complicates developer assignment when different defects require different expertise.

4. Missing Environment Details

"It works on my machine" is the most predictable response to a bug report without environment information. Always include the exact browser, OS, device, and application version. When testing across configurations, note which configurations exhibit the defect and which do not.

5. No Visual Evidence

A screenshot or video recording is worth more than a paragraph of description. Modern testing workflows should include visual evidence as a standard component of every bug report, not an optional addition.

Best Practices for Writing Effective Bug Reports

1. Write for the Reader, Not the Writer

The audience for a bug report is the developer who will fix the defect. Write with their needs in mind. They need to understand the problem, reproduce it, and verify the fix. Everything in the report should serve one of those three purposes.

2. Be Specific About Frequency

Is the defect consistent (happens every time the steps are followed) or intermittent (happens sometimes under certain conditions)? Intermittent bugs require more detail about the conditions under which they were observed and any patterns noticed (for example, "occurred 3 out of 10 attempts, always after the session exceeded 20 minutes").

3. Include What You Tried

If you attempted workarounds or variations, document them. "The defect occurs with Chrome 120 but not Firefox 121" immediately narrows the investigation scope. "The defect occurs with Product A and Product B but not Product C" suggests the issue may be product data related rather than code related.

4. Separate Observation from Interpretation

Report what you observed, not what you think the cause is. "The cart total shows $34 instead of $40" is an observation. "I think the discount calculation is wrong" is an interpretation. Include observations in the Actual Result section and interpretations in the Additional Notes section if they may be helpful.

5. Use Consistent Terminology

Refer to UI elements, features, and workflows using the same names that appear in the application and in internal documentation. Inconsistent terminology ("the pay button" vs "the submit order button" vs "the checkout CTA") creates confusion about which element is affected.

How AI is Transforming Bug Reporting

The most time consuming aspects of bug reporting are evidence collection, environment documentation, and root cause hypothesis. These are precisely the areas where AI native testing platforms deliver the most value.

Automated Evidence Collection

AI native test automation platforms automatically capture screenshots at every test step, record network requests and responses, log DOM state changes, and preserve browser console output. When a test fails, all of this evidence is packaged into the test report without the tester manually gathering it.

This eliminates the most common bug report failure: insufficient evidence. Every automated test failure comes with a complete evidence package that developers can use to begin diagnosis immediately.

AI Root Cause Analysis

Traditional bug reports describe symptoms. AI powered root cause analysis identifies causes. When a test fails, AI models analyze the test steps, network events, failure reasons, error codes, and DOM changes to provide an automated diagnosis of why the failure occurred.

This capability has been shown to reduce defect resolution time by 75%. Instead of a developer spending hours reproducing the defect and tracing through code to find the source, the AI analysis points them directly to the failure point with supporting evidence.

Intelligent Test Reports with Step by Step Evidence

Comprehensive test reports generated by AI native platforms include step by step execution evidence in PDF and Excel/CSV formats. Each step shows what was attempted, what was observed, and whether the outcome matched expectations. When integrated with tools like Jira, Xray, or TestRail, failed test steps automatically generate bug reports pre populated with reproduction steps, environment details, and visual evidence.

This integration transforms bug reporting from a manual documentation exercise into an automated byproduct of test execution. Testers spend their time analyzing defects rather than describing them.

Journey Summaries

AI powered journey summaries analyze the full test execution, including all steps, checkpoints, and outcomes, and produce a human readable narrative summary. This gives developers and product managers a quick understanding of what was tested and where things went wrong without reading through every individual test step.

How Virtuoso QA Changes the Bug Reporting Workflow

The most frustrating part of bug reporting is that the evidence developers need most is the evidence testers are least likely to capture in the moment. By the time the report is written, the session has ended and the proof is gone.

Virtuoso QA solves this by treating evidence collection as an automatic output of test execution.

Complete evidence captured automatically

Every test failure produces screenshots at each step, full network request and response logs, DOM snapshots at the point of failure, and console errors, all packaged without any manual effort from the tester.

AI Root Cause Analysis

Rather than describing what went wrong, Virtuoso QA's AI explains why. It correlates failure evidence across UI behaviour, API responses, and network events to produce a diagnosis that points developers directly to the cause. Defect resolution time drops by approximately 75 percent.

Native integration with your defect tracking tools

Virtuoso QA integrates with Jira, Xray, and TestRail. Failed tests generate pre-populated bug reports with reproduction steps, environment details, and visual evidence already attached. Testers review and submit rather than author from scratch.
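Under the hood, pre-populating a tracker amounts to assembling a structured payload from captured evidence. The sketch below uses the shape of Jira's REST API create-issue format to illustrate the idea; the project key, field values, and evidence URLs are hypothetical, and Virtuoso QA's actual integration handles this internally:

```python
def build_bug_payload(title, steps, environment, evidence_urls):
    """Assemble a Jira-style create-issue payload from captured test evidence."""
    description = (
        f"*Environment:* {environment}\n\n"
        "*Steps to Reproduce:*\n"
        + "\n".join(f"# {s}" for s in steps)   # "#" = numbered list in Jira wiki markup
        + "\n\n*Evidence:*\n"
        + "\n".join(evidence_urls)
    )
    return {
        "fields": {
            "project": {"key": "QA"},          # hypothetical project key
            "summary": title,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }

payload = build_bug_payload(
    title="Submit button unresponsive when address contains '&'",
    steps=["Open checkout", "Enter '123 Smith & Sons Blvd'", "Click Submit Order"],
    environment="Chrome 120 / Windows 11 / build 2.8.4",
    evidence_urls=["https://example.com/screenshots/step-7.png"],
)
assert payload["fields"]["issuetype"]["name"] == "Bug"
```

A client would POST a dict like this to Jira's `/rest/api/2/issue` endpoint; the point is that every field a developer needs is filled in mechanically from test execution data rather than typed by hand.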

AI Journey Summaries

Plain-English summaries of what was tested, what passed, and what failed, giving product managers and development leads the context they need for prioritisation decisions without reading through execution logs.


Frequently Asked Questions

What should a bug report include?

Essential components include a unique ID, descriptive title, environment details (OS, browser, device, app version), steps to reproduce, expected result, actual result, severity, priority, visual evidence (screenshots or video), and any additional context such as related defects or workarounds.

How do you write steps to reproduce a bug?

Write sequential, specific actions that anyone can follow to observe the defect. Number each step. Include exact values, specific pages, and precise interactions. A developer should be able to follow your steps verbatim and see the same defect without guessing or interpreting.

What is the difference between severity and priority in a bug report?

Severity measures the technical impact of the defect on system functionality (how broken is it?). Priority measures the business urgency of fixing the defect (how soon does it need to be fixed?). A critical severity defect in a rarely used admin tool might be medium priority, while a low severity cosmetic defect on the homepage might be high priority.

How do you prioritize bug reports?

Prioritization considers business impact (how many users are affected), severity (how broken is the functionality), workaround availability, release timeline proximity, and customer visibility. Critical defects in customer facing flows with no workaround receive the highest priority regardless of other factors.

Should testers suggest the fix in a bug report?

Testers should not prescribe the technical fix, as that is the developer's domain. However, testers should include any observations that might help diagnosis: patterns noticed, conditions that trigger or prevent the defect, related functionality that works correctly, and any error messages observed. These observations accelerate root cause analysis.

How many bugs should a tester report per day?

There is no meaningful benchmark for daily bug count. Quality of bug reports matters far more than quantity. A tester who files 5 well documented, reproducible defects with complete evidence delivers more value than one who files 20 incomplete reports that require developer follow up. Focus on defect detection effectiveness rather than report volume.
