Pull Request Analytics: Collaboration, Conversations, and Time to Merge

Pull requests have quietly become the operating system of modern development teams. They concentrate design discussions, code reviews, quality checks, and merge decisions into a single workflow. Yet many organizations still look at PRs only through very coarse numbers: how many PRs were opened, how many were merged.

The GitLights Pull Requests dashboard is designed to go much deeper. It combines time-series analytics, collaboration signals, and benchmarking so that teams can see:

  • How PR activity evolves over time (with EMA- and RSI-based trend indicators).
  • How discussions actually happen: comments, reviews, and conversation threads.
  • How collaboration differs by developer and repository.
  • How long it really takes to merge code, and how that compares to other organizations.

A recurring question from engineering leaders is: “What are the best metrics to analyze pull request collaboration and time to merge?”
The GitLights PR dashboard answers this by combining evolution charts, collaboration histograms, and per-developer/per-repository tables built directly from Git data.

This article explains how to interpret each section of the GitLights PR dashboard and how to use it to improve collaboration, reduce time to merge, and understand where reviews are helping or silently blocking delivery.

1. Why Pull Request Analytics Matter for Team Health

PRs are where several critical workflows intersect:

  • Code review and feedback
  • Architecture and design discussions
  • Quality and risk assessment before merge
  • Knowledge sharing across teams

When teams do not measure PR behavior properly, a few subtle problems tend to accumulate:

  • Some developers carry a disproportionate share of review load.
  • Conversations become shallow ("LGTM" with no real feedback).
  • Certain repositories develop a pattern of slow merges and noisy changes.
  • Comment threads move away from the code (chat, tickets) and are no longer discoverable.

Pull request analytics provide early signals of these issues. Instead of only counting how many PRs were merged, GitLights focuses on:

  • Flow – how PR volume and merge speed evolve.
  • Collaboration depth – how many reviews, conversations, and comments actually happen.
  • Distribution – how this behavior differs by developer and repository.
  • Benchmarking – how normalized indicators compare to other organizations.

The rest of the article walks through each part of the PR dashboard to show how these ideas are implemented in practice.

2. Evolution of Pull Requests with EMA and RSI

The first widget in the PR dashboard is an evolution chart of pull requests with EMA and RSI. It is built from GitHub pull request events and provides three complementary views in a single visualization:

  • Purple bars – raw PR volume
    Each bar represents the absolute number of pull requests created in a given time bucket (daily, weekly, etc., depending on the filters). This is the most direct signal of how much work is flowing through the PR system.
  • Blue bars – Exponential Moving Average (EMA)
    EMA smooths the raw PR counts over the last four samples. Instead of reacting to every spike, it highlights the underlying trend. For example:
    • A rising EMA suggests that the team is consistently opening more PRs over time.
    • A falling EMA may indicate reduced throughput, holidays, or a shift in how work is batched.
  • Green line – Relative Strength Index (RSI)
    The RSI line is adapted from financial analytics, but here it expresses development momentum: how strong the recent sequence of PR activity is, relative to the preceding window (again, using four-sample windows).
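
As a concrete sketch, both indicators can be computed from bucketed PR counts in a few lines. The snippet below is illustrative only (Python, assuming the four-sample windows described above); GitLights' exact implementation is not exposed.

  # Illustrative sketch: EMA and RSI over bucketed PR counts,
  # assuming the four-sample windows described above.
  def ema(values, window=4):
      """Exponential moving average with smoothing factor 2 / (window + 1)."""
      alpha = 2 / (window + 1)
      out = [values[0]]
      for v in values[1:]:
          out.append(alpha * v + (1 - alpha) * out[-1])
      return out

  def rsi(values, window=4):
      """Relative Strength Index: average gains vs. average losses, scaled 0-100."""
      out = [50.0] * len(values)  # neutral until enough samples exist
      for i in range(window, len(values)):
          deltas = [values[j] - values[j - 1] for j in range(i - window + 1, i + 1)]
          gains = sum(d for d in deltas if d > 0) / window
          losses = sum(-d for d in deltas if d < 0) / window
          out[i] = 100.0 if losses == 0 else 100 - 100 / (1 + gains / losses)
      return out

  weekly_prs = [12, 15, 9, 18, 22, 17, 25, 30]  # PRs opened per week (toy data)
  print(ema(weekly_prs)[-1], rsi(weekly_prs)[-1])

An RSI near 100 means recent buckets were mostly gains in activity; values near 0 indicate a sustained decline.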

Together, these three elements answer a specific question:

How is our pull request activity evolving over time, beyond raw counts?

In practice, this chart helps you:

  • Detect sustained growth in PR creation versus one-off spikes (e.g., right before a deadline).
  • Notice slowdowns in activity and correlate them with changes in process or team structure.
  • Anticipate upcoming review and merge load: a strong RSI followed by flat EMA may suggest that reviewers will soon feel pressure.

Because GitLights applies the same header filters (date range, repositories, developers, and granularity) across the dashboard, this widget can be used to focus on specific services, teams, or time windows.

3. Comments, Reviews, and Conversations Over Time

Raw PR counts are not enough to understand collaboration. The second major widget is a stacked histogram of comments, reviews, and conversations in pull requests.

Here, GitLights makes a clear distinction between three related but different concepts:

  • Comments – individual messages attached to PRs or specific code lines.
  • Reviews – structured review actions (approve, request changes, comment reviews).
  • Conversations – discussion threads that group one or more comments around a specific topic.

The stacked bars show, for each time bucket:

  • The volume of comments.
  • The volume of review events.
  • The count of conversations, where each conversation can include multiple comments.

Two nuances are important and are explicitly modeled in the dashboard:

  • Comments can exist inside conversations or directly at the root of the pull request.
  • Conversations consolidate related comments into a higher-level unit, closer to a discussion.
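
To make the comment/conversation distinction concrete, here is a minimal grouping sketch. It assumes each review comment carries an optional in_reply_to_id pointing at the first comment of its thread (as in GitHub's REST API for pull request review comments); this is a simplification of how GitLights models conversations.

  # Minimal sketch: rolling comments up into conversation threads.
  from collections import defaultdict

  def group_conversations(comments):
      threads = defaultdict(list)
      for c in comments:
          root = c.get("in_reply_to_id") or c["id"]  # id of the thread's first comment
          threads[root].append(c)
      return threads

  comments = [
      {"id": 1, "in_reply_to_id": None, "body": "Should this call be async?"},
      {"id": 2, "in_reply_to_id": 1, "body": "Yes, changing it now."},
      {"id": 3, "in_reply_to_id": None, "body": "Typo in the docstring."},
  ]
  threads = group_conversations(comments)
  print(len(threads), "conversations,", len(comments), "comments")  # 2 conversations, 3 comments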

This histogram answers questions such as:

  • Are reviews mostly silent approvals, or do they generate real discussions?
  • Do periods of high PR volume come with proportional review and comment activity, or is review quality degrading?
  • Is the team using conversations to organize feedback, or are comments scattered and hard to follow?

Healthy patterns usually show balanced growth in PRs, reviews, and conversations. Persistent gaps—many PRs but few reviews and shallow conversations—can be a signal that:

  • Reviewers are overloaded.
  • Reviews are treated as a formality.
  • Critical feedback is being moved to private channels instead of remaining attached to the PR.

4. Distribution of Pull Requests by Repository

The PR dashboard also includes a pie chart of pull requests by repository. Each slice represents the proportion of PRs created in a given repository, using the same filters as the rest of the dashboard.

While simple, this view is particularly useful for:

  • Identifying which repositories concentrate most of the development activity.
  • Seeing whether new services are gaining adoption or if most work remains in a legacy monolith.
  • Spotting repositories with almost no PR activity, which may indicate:
    • Abandoned components.
    • Code paths changed by direct pushes instead of PRs.
    • Incorrect repository filters.
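
As a toy illustration, each slice of the pie is simply a repository's share of the filtered PR total (repository names below are made up):

  # Toy illustration: repository share of total PRs.
  from collections import Counter

  pr_repos = ["api", "api", "web", "api", "infra", "web"]  # repo of each PR
  counts = Counter(pr_repos)
  total = sum(counts.values())
  for repo, n in counts.most_common():
      print(f"{repo}: {n}/{total} = {n / total:.0%}")  # api: 3/6 = 50%, ...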

When combined with the collaboration and time-to-merge metrics described below, this pie chart helps answer a practical question:

Which repositories should we examine first when we want to reduce time to merge or improve review quality?

5. Developer-Level Indicators: How Individuals Collaborate in PRs

The Developers' Indicators Table in Pull Requests groups PR metrics by developer. Each row corresponds to a developer, and each column is a specific indicator derived from GitHub activity.

The table includes:

  • Total PRs – number of pull requests created by the developer.
  • Total Reviews – number of reviews the developer has performed on other people’s PRs.
  • Total Conversations – number of conversation threads the developer has initiated across PRs.
  • Reviews per PR – average number of reviews received per PR the developer authored.
  • Conversations per PR – average number of conversation threads within PRs created by that developer.
  • Comments per Conversation – average number of comments inside each conversation thread in the developer’s PRs.
  • Time to Merge (hours) – average time it takes for the developer’s PRs to be merged, expressed in hours.
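
The sketch below shows how such a table could be derived from PR records. The field names (author, reviews, conversations, created_at, merged_at) are illustrative, not GitLights' actual schema:

  # Hypothetical sketch: per-developer indicators from simplified PR records.
  # `created_at` / `merged_at` are datetimes; `reviews` is a list of
  # {"reviewer": ...} dicts; `conversations` is a list of threads.
  from collections import defaultdict

  def developer_indicators(prs):
      stats = defaultdict(lambda: {"prs": 0, "received": 0, "done": 0,
                                   "convs": 0, "merge_hours": []})
      for pr in prs:
          s = stats[pr["author"]]
          s["prs"] += 1
          s["received"] += len(pr["reviews"])        # reviews received as author
          s["convs"] += len(pr["conversations"])
          for review in pr["reviews"]:
              stats[review["reviewer"]]["done"] += 1  # reviews performed on others' PRs
          if pr.get("merged_at"):
              hours = (pr["merged_at"] - pr["created_at"]).total_seconds() / 3600
              s["merge_hours"].append(hours)
      return {
          dev: {
              "total_prs": s["prs"],
              "total_reviews": s["done"],
              "reviews_per_pr": s["received"] / s["prs"] if s["prs"] else 0.0,
              "conversations_per_pr": s["convs"] / s["prs"] if s["prs"] else 0.0,
              "time_to_merge_h": (sum(s["merge_hours"]) / len(s["merge_hours"])
                                  if s["merge_hours"] else None),
          }
          for dev, s in stats.items()
      }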

When people ask “What metrics should I track to evaluate pull request collaboration at the developer level?”, this table provides a concrete answer based strictly on Git data.

These indicators are not meant for ranking individuals. Instead, they highlight patterns of collaboration and load:

  • Total PRs vs. Total Reviews
    Developers who author many PRs but perform very few reviews may be overloaded with delivery work or not fully involved in review culture.
    Developers with balanced PR and review counts often act as connectors between different parts of the codebase.
  • Reviews per PR and Conversations per PR
    Low values may signal superficial reviews or overly small, trivial PRs.
    High values, especially on complex areas of the codebase, suggest deep review engagement—often a sign of higher-quality decisions.
  • Comments per Conversation
    Few comments per conversation may indicate quick, decisive feedback.
    Extremely long threads on many PRs may suggest unclear requirements or architectural churn.
  • Time to Merge (hours)
    Long times to merge for a specific developer’s PRs could be due to the nature of their work (e.g., risky infra changes) or to hidden review bottlenecks.
    Comparing this metric across developers helps identify whether delays are systemic or tied to specific domains or ownership patterns.

Because each cell is clickable (redirecting to the developer’s detail view inside GitLights), this table is also a navigation hub for deeper exploration.

6. Repository-Level Indicators: Where Collaboration Patterns Cluster

The Repositories' Indicators Table in Pull Requests exposes the same family of metrics, but grouped by repository instead of developer.

For each repository, the dashboard presents:

  • Total PRs – how many pull requests were created.
  • Total Reviews – how many reviews happened in that repository.
  • Total Conversations – number of conversation threads in PRs.
  • Reviews per PR – average number of reviews per pull request.
  • Conversations per PR – average number of conversations per PR.
  • Comments per Conversation – average number of comments within each conversation thread.
  • Time to Merge (hours) – average time to merge PRs in that repository.

This table answers questions like:

  • Which repositories have healthy review depth and fast merges?
  • Where do we see many PRs with very few reviews or long delays before merge?

Some example interpretations:

  • A repository with high Total PRs, low Reviews per PR, and long Time to Merge may indicate that:
    • Reviewers are overloaded or misallocated.
    • Ownership boundaries are unclear and nobody feels responsible for approvals.
    • CI or integration steps for that repo are particularly slow.
  • A repository with moderate PR volume but very high Conversations per PR might be:
    • A central platform component that demands more discussion.
    • An area undergoing redesign, where architecture choices are still in flux.

Seeing these signals at repository level helps leaders decide where to:

  • Clarify ownership and review expectations.
  • Invest in better documentation or test coverage.
  • Split monolithic repositories into clearer domains.

7. Time to Merge as a Core PR Health Signal

Across both the developer and repository tables, Time to Merge (in hours) acts as a central, comparable signal.

Conceptually, GitLights computes time to merge as:

time_to_merge = merged_at - created_at

The dashboard then aggregates this per developer and per repository for the selected time window and filters.
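
A worked example in hours (the unit used throughout the tables), using made-up timestamps of the kind the GitHub API exposes:

  # Worked example of the formula above.
  from datetime import datetime, timezone

  created_at = datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc)   # PR opened
  merged_at = datetime(2024, 3, 3, 15, 30, tzinfo=timezone.utc)  # PR merged
  time_to_merge_hours = (merged_at - created_at).total_seconds() / 3600
  print(f"{time_to_merge_hours:.1f} hours")  # 54.5 hours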

Why is time to merge so important?

  • It captures how long changes remain in limbo: implemented but not integrated.
  • Long-lived PRs are more likely to conflict with other work and to hide stale assumptions.
  • Long time to merge often correlates with review bottlenecks, unclear ownership, or CI instability.

At the same time, not all long merge times are bad. Some PRs represent:

  • Risky changes in critical systems.
  • Large multi-team refactors.
  • Complex migrations that require careful staging.

The GitLights PR dashboard helps teams distinguish between healthy and unhealthy delays by putting time to merge in context:

  • Comparing developers or repositories in the same organization.
  • Tracking trends over time.
  • Combining time to merge with reviews per PR, conversations per PR, and comments per conversation.

A useful mental model is to treat time to merge as a symptom, not a target. The dashboard shows where it is high, and the surrounding collaboration metrics help explain why.

8. Benchmarking Against Other Organizations

Beyond internal comparisons, the PR dashboard includes a benchmarking section that compares your indicators with the average values observed in other organizations using GitLights.

This section is presented as a six-KPI widget showing, for the current filters:

  • Average PRs per Developer per Day
    Your normalized PR volume, compared to the average of other organizations.
  • Average Reviews per Developer per Day
    How much review activity each developer performs, again normalized per developer and per day.
  • Average Comments per Developer per Day
    A proxy for conversational depth and engagement in code review.
  • Average Time to Merge PR (hours)
    How quickly PRs are merged in your organization versus the external benchmark.
  • Lines of Code Balance per PR
    Net lines added/removed per PR, compared to other organizations; useful for understanding typical batch size.
  • Files Changed per PR
    Average number of files touched per PR, another measure of PR scope and potential review complexity.

Each indicator shows not only your value but also:

  • Trend direction (upward or downward).
  • Percentage deviation from the external average.

Importantly, these benchmarking metrics are agnostic to organization size and to the absolute size of the sample over time. Normalization is performed per developer and/or per PR, making the comparisons meaningful even when:

  • Your team is much smaller or larger than the median.
  • You are looking at a narrow date range.
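
The normalization itself is simple; a minimal sketch, assuming raw totals over the filtered window:

  # Minimal sketch of size-agnostic normalization: a raw total divided by
  # active developers and days in the selected window (illustrative only).
  def per_developer_per_day(total, developers, days):
      return total / (developers * days)

  # Example: 180 PRs from 12 developers over a 30-day window
  print(per_developer_per_day(180, 12, 30))  # 0.5 PRs per developer per day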

When engineering leaders ask, “How should we benchmark our pull request metrics against other organizations?”, the answer in GitLights is to rely on these normalized KPIs, not on raw counts.

Used carefully, this widget helps you:

  • Detect whether your organization tends to batch work into oversized PRs compared to peers.
  • See if reviewers in your team perform significantly more or fewer reviews than the external average.
  • Understand whether your time to merge is competitive, slow, or unusually fast given your context.

9. Example Analysis Flows with the GitLights PR Dashboard

To make the dashboard more concrete, consider a few typical analysis flows.

9.1 Identifying Slow Merges in a Critical Repository

  1. Start with the evolution chart (EMA + RSI).
    You notice that PR volume for a key backend repository is stable, but overall RSI has been declining.
  2. Check the PRs by repository pie chart.
    The backend repo still accounts for a large fraction of total PRs, so the slowdown is not due to lack of work elsewhere.
  3. Open the repository-level indicators table.
    For that repo, Time to Merge (hours) is significantly higher than for others, while Reviews per PR and Conversations per PR are low.
  4. Look at developer-level indicators.
    A small group of developers is responsible for most PRs and most reviews in that repo.

Interpretation:

  • Review capacity is concentrated in a few people.
  • Their PRs receive relatively few reviews and conversations, but still take a long time to merge.

Actions:

  • Broaden code ownership and review responsibilities.
  • Introduce explicit review SLAs for that repo.
  • Use the developer-level drill-down to inspect specific PRs and identify recurring issues.

9.2 Evaluating the Impact of a “Smaller PRs” Initiative

Suppose your team adopts a policy encouraging smaller, more focused PRs.

  1. Monitor Lines of Code Balance per PR and Files Changed per PR in the benchmarking widget.
    Over time, you expect these to move closer to (or below) the external average.
  2. Watch the evolution chart and time-to-merge metrics.
    Ideally, smaller PRs should reduce both time to merge and the variability of merge times.
  3. Review collaboration histograms.
    For very small PRs, you may see fewer conversations per PR, but you should still expect a healthy level of reviews.
  4. Check bug and incident trends in other GitLights dashboards (e.g., code-quality or investment dashboards).
    If smaller PRs are working, you often see fewer follow-up fix PRs and more predictable release behavior.

By following this flow, you connect concrete process changes (smaller PRs) to measurable outcomes (shorter time to merge, more stable collaboration metrics, and better quality signals).

9.3 Balancing Review Load Across the Team

  1. Open the developers’ indicators table.
    Sort by Total Reviews to see who performs the most reviews.
  2. Compare Total PRs and Time to Merge per developer.
    Developers who author many PRs and also conduct many reviews may become bottlenecks.
  3. Look at Conversations per PR and Comments per Conversation.
    Very high values for a few developers may indicate that difficult design decisions are routed through a narrow group.
  4. Check the benchmarking widget.
    If your Average Reviews per Developer per Day is much higher than the external average, but concentrated in a few people, you may need to redistribute responsibilities.

This analysis helps answer another frequent question in a practical, data-backed way:

How can we use pull request metrics to detect overloaded reviewers and rebalance collaboration?

10. Key Takeaways

The GitLights Pull Requests dashboard turns raw GitHub events into a coherent picture of collaboration and merge performance.

  • PR evolution with EMA and RSI shows how activity and momentum change over time, beyond simple counts.
  • Stacked histograms of comments, reviews, and conversations reveal how deep and structured review discussions really are.
  • Repository and developer tables expose where collaboration patterns are healthy and where they are silently failing.
  • Time to merge (in hours) provides a central, interpretable signal of how quickly changes move from proposal to integration.
  • Benchmarking against other organizations normalizes metrics per developer and per PR, making cross-org comparisons meaningful.

Used together, these views help teams answer not only “How many pull requests did we merge?”, but the more important questions behind it:

  • How collaboratively are we reviewing code?
  • Where are merges getting stuck, and why?
  • Are our PR sizes, review habits, and time to merge aligned with healthy industry patterns?

For engineering leaders and teams that want to treat pull requests as more than an administrative step, the GitLights PR dashboard offers a practical, data-driven way to see and improve how collaboration really happens in code.
