Inside GitLights’ Developer Performance Model
Modern engineering teams need more than raw activity counts to understand how developers are really contributing. Commit volume alone is not enough, and simple "leaderboards" tend to be noisy, unfair, and easy to misinterpret.
GitLights takes a different approach: it builds a multi-dimensional performance profile for each developer based on how they actually work in GitHub. This profile is surfaced in the Developer Detail dashboard as a polar (radar) chart with six core scores, plus a set of supporting charts and tables that give context around code changes, pull requests, reviews and investment balance.
This article explains how that model is constructed, what the scores mean, and how to use the dashboard responsibly in real teams.
What data does the Developer Detail dashboard use?
Before looking at the scores, it is important to understand the underlying data. The Developer Detail dashboard for a given developer and organization is built from:
- Commits: frequency, lines added, lines deleted, files changed, commit message size.
- Pull requests: number of PRs created, reviews performed, conversations initiated, comments per conversation, time to merge.
- Code changes over time: additions vs deletions and their balance across the selected period.
- Investment categories: commit classification into categories like new development, refactoring, fixes, testing, documentation, CI/CD, and others.
- Filters: all metrics respect the global filters in the dashboard header (date range, repositories, developers, granularity).
All of this comes directly from GitHub data. The Developer Detail view then aggregates and transforms these signals into six scores and several contextual visualizations.
A key design decision: the Developer Detail dashboard always shows the profile of a developer for a specific organization only. Even if the same GitHub user appears in multiple organizations, the chart you see is scoped to the current org.
The six core scores in the radar chart
At the center of the Developer Detail dashboard is a polar chart that summarizes the developer’s profile in six dimensions. These are not arbitrary labels; they are derived from concrete GitHub behaviors and statistics.
1. Feature Implementation Score
This score represents the developer’s ability to deliver new functionality. It is mainly influenced by patterns related to:
- Commits that introduce new code and features.
- Investment categories associated with new development work.
- The volume and impact of changes that create or significantly extend functionality.
In practice, a high Feature Implementation Score suggests the developer drives visible product changes and contributes strongly to feature delivery.
2. Agility Score
The Agility Score reflects speed and efficiency in task delivery. It is informed by temporal and volume-based signals such as:
- How consistently the developer contributes over time.
- How quickly changes move from code to merged pull requests.
- The balance between small, frequent updates and large, infrequent ones.
This dimension answers, in a data-driven way: "How responsive and agile is this developer in their day-to-day work?"
3. Collaborative Contribution Score
This score evaluates how much the developer contributes to others’ work through reviews, comments and discussions. Under the hood it considers, among other signals:
- Pull request reviews performed by the developer.
- Conversations started in PRs.
- Comments left in review threads.
- Investment categories linked to code reviews and collaborative activities.
A high Collaborative Contribution Score is a strong indicator that the developer helps elevate code quality and supports teammates, not just by pushing code but by participating actively in review workflows.
4. Code Improvement Score
The Code Improvement Score highlights a developer’s investment in refactoring and long-term maintainability. It is associated with:
- Commits categorized as refactorings or structural improvements.
- Patterns in lines added vs lines deleted that suggest cleanup and simplification.
- Contributions that reduce technical debt rather than only adding new surface area.
Teams often ask: "Who is taking care of the health of our codebase?" — this score is one of the answers.
5. Technical Contribution Score
This dimension measures contributions to critical technical concerns such as:
- Security-related changes.
- CI/CD and automation improvements.
- Dependency management and infrastructure-level work.
These contributions may not always be visible in product feature lists, but they are essential for reliability and scalability. The Technical Contribution Score makes them explicit in the developer profile.
6. Fixes Score
Finally, the Fixes Score evaluates the developer’s effectiveness in detecting and solving problems. It is driven by:
- Commits associated with bug fixes and maintenance.
- Patterns where code changes correlate with resolving issues or stabilizing behavior.
A strong Fixes Score is common among developers who are frequently called on to resolve incidents or to unbreak failing flows.
Together, these six scores form a balanced view of contribution that goes far beyond "who committed more lines".
How the scoring algorithm works: percentiles and weighting
The scores in the radar chart are not raw counts. They are the result of a statistical weighting algorithm applied across the GitLights population.
At a high level:
- For each underlying metric, GitLights looks at its distribution across all developers in the platform.
- The 99th percentile of each metric is used as the reference point for a score of 100.
- Values below that percentile are mapped using exponential curves, not linear scaling, to avoid over-emphasizing outliers and to provide a more informative spread among typical developers.
This means that:
- A score of 100 represents performance around the top 1% for that metric.
- The majority of developers will sit in a band below that, with meaningful differences between 40, 60 and 80, instead of everyone clustering at extremes.
Each of the six dimensions in the radar chart is built from a combination of these transformed metrics, so the final scores:
- Are relative to the broader GitLights population, not to a single team.
- Are comparable across organizations, while still displaying data only from the current organization for the selected developer.
From an interpretation standpoint, it is helpful to treat the scores as signal, not verdict: they help you ask better questions about a developer’s role and patterns, rather than providing a simplistic ranking.
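To make the normalization concrete, here is a minimal sketch of a percentile-plus-exponential mapping of the kind described above. The `curve` shape parameter and the exact percentile handling are illustrative assumptions, not documented GitLights internals:

```python
import math

def normalized_score(value, population, p99=None, curve=4.0):
    """Map a raw metric value to a 0-100 score.

    The 99th percentile of the population maps to 100; values below it
    are scaled along a concave exponential curve so that typical
    developers spread out instead of clustering near zero.
    `curve` is a hypothetical shape parameter chosen for illustration.
    """
    if p99 is None:
        ordered = sorted(population)
        p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    ratio = min(value / p99, 1.0) if p99 > 0 else 0.0
    # Concave mapping: rises quickly for low values, flattens near the
    # top, which reduces the visual dominance of extreme outliers.
    return 100 * (1 - math.exp(-curve * ratio)) / (1 - math.exp(-curve))
```

With this shape, a developer at half the 99th-percentile value scores well above 50, which is what produces the "meaningful differences between 40, 60 and 80" mentioned above rather than a linear spread.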
Reading the radar chart: shapes, balance and comparison
The Developer Detail dashboard is designed to be readable at a glance.
Some practical guidelines:
- Symmetry vs. asymmetry: a nearly symmetrical hexagon indicates a balanced profile across the six skills. A skewed shape highlights specialization or gaps.
- Top vs bottom of the chart: the upper vertices tend to emphasize dimensions closer to individual contribution (e.g., Feature Implementation, Agility), while the lower vertices emphasize more collective and structural contributions (e.g., Collaborative and Technical Contribution).
- Edge lengths: longer distances from the center mean higher normalized scores for that dimension.
The chart can also display two developers at once, overlaying their profiles with different colors. This supports use cases such as:
- Comparing similar roles (e.g., two senior backend developers) to understand complementary strengths.
- Validating assumptions about who is leading in refactoring or reviews.
- Building mentoring pairs where one developer is strong in the areas where another is weaker.
Because the scores are normalized and visually aligned, differences between developers are easy to spot without needing to read multiple tables of metrics.
Supporting charts: time series and investment context
The radar chart is the core of the model, but the Developer Detail dashboard also includes several complementary widgets that expose the underlying behavior over time.
Commits evolution with EMA and RSI
One widget shows the historical evolution of commits for the developer:
- Bars represent the absolute number of commits in each period.
- A line shows the Exponential Moving Average (EMA) of commits, smoothing short-term noise and revealing trends.
- Another line displays a Relative Strength Index (RSI) adapted to Git activity, capturing the "momentum" of contribution over the last few samples.
This combination helps answer questions like:
- "Has this developer’s contribution been accelerating or slowing down recently?"
- "Are we looking at a temporary spike or a sustained change in behavior?"
Code lines: added, deleted and balance
Another widget focuses on code line dynamics:
- Stacked bars show added and deleted lines and their net balance over time.
- This reveals whether a developer is mostly growing the codebase or performing refactors and cleanups.
Together with the investment categories, this context helps explain why a developer’s Code Improvement or Feature Implementation scores look the way they do.
Developer’s investment balance
A dedicated widget breaks down the developer’s contributions by investment category (e.g., new development, refactoring, fixes, testing, documentation, security, performance optimization, CI/CD, dependency management, code review).
This is presented as a horizontal distribution of percentages, indicating where the developer spends their effort within the development lifecycle.
Instead of guessing from raw commit logs, you can see if someone is heavily oriented toward new implementations, stabilization work, or technical underpinnings like CI/CD and security.
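The percentage breakdown itself is straightforward; a minimal sketch, assuming each commit has already been classified into one investment category:

```python
from collections import Counter

def investment_balance(commit_categories):
    """Percentage distribution of commits across investment categories.

    `commit_categories` is a list with one category label per commit,
    e.g. "new development", "refactoring", "fixes".
    """
    counts = Counter(commit_categories)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}
```

The hard part, of course, is the classification step that produces the labels; the widget only makes the resulting distribution visible.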
Commit and pull request indicator tables
Two tables consolidate key indicators per developer for:
- Commits: total commits, commits per day, additions per commit, deletions per commit, files changed per commit, message size, and related metrics.
- Pull requests: total PRs, total reviews performed, conversations initiated, reviews per PR, conversations per PR, comments per conversation, and average time to merge.
These tables are fully searchable and clickable, linking back to the detailed view when you need to drill into a specific developer’s behavior.
Typical questions teams ask — and how the model answers them
To clarify how the Developer Detail dashboard can be used in practice, it helps to look at a few common questions.
"How does GitLights measure collaborative contribution?"
Collaborative contribution is not a vague label. In the model, it is supported by concrete signals such as:
- Reviews performed on other people’s pull requests.
- Conversations and comment threads initiated in PRs.
- The structure and density of comments in review discussions.
- Commits and investment categories tied to refactors, CI/CD and shared infrastructure.
By aggregating these dimensions and normalizing them against the broader population, GitLights surfaces developers who amplify others’ work through reviews and shared technical foundations.
"What exactly goes into the Agility Score?"
Agility is fundamentally about how work flows over time. The score draws on:
- The frequency and regularity of commits.
- The relationship between code contribution patterns and pull-request activity.
- Time-based indicators like the evolution of commit EMA/RSI.
The result is not a measure of "hours worked" but of delivery dynamics: how consistently and quickly changes move through the Git lifecycle.
"Can this model be used for performance reviews?"
The Developer Detail dashboard is not a standalone performance review tool. It is best used as:
- An input to structured conversations with developers.
- A way to uncover blind spots (e.g., lack of review activity, over-concentration on one type of work).
- A mechanism to recognize often invisible contributions (refactoring, CI/CD, documentation, security).
The data is descriptive and relative. Interpretation still requires context: role expectations, project assignments, and non-Git contributions are all outside the scope of this model.
Using the Developer Detail model responsibly
Because the Developer Detail dashboard normalizes across a large population and combines multiple dimensions, it can be tempting to reduce everything to a single visual pattern. GitLights is explicitly designed to avoid simplistic rankings.
Some practical guidelines for responsible use:
- Look for patterns, not single points: focus on how the profile evolves over time, not just on today’s shape.
- Consider role and expectations: a platform engineer with a strong Technical Contribution Score and moderate Feature Implementation Score may be exactly what you need.
- Use comparison mode carefully: comparing two developers can surface complementarities or gaps, but it should not replace direct feedback and context.
- Combine with qualitative signals: code quality, stakeholder feedback, incident history and product impact remain critical.
When used in this way, the Developer Detail model becomes a shared language for discussing contribution patterns with developers, grounded in observable Git behavior instead of anecdotes.
Summary
The GitLights Developer Detail dashboard turns raw GitHub data into a structured, multi-dimensional view of each developer’s contribution:
- It uses commits, pull requests, reviews, comments and investment categories as input signals.
- It aggregates them into six normalized scores: Feature Implementation, Agility, Collaborative Contribution, Code Improvement, Technical Contribution and Fixes.
- It applies a percentile-based, weighted algorithm so scores are relative to the broader population rather than arbitrary cutoffs.
- It complements the radar chart with time-series charts, investment breakdowns and indicator tables that expose the underlying behavior.
For engineering leaders, this model provides a transparent, technically grounded lens on how people contribute to the codebase and to the team. For developers, it offers a way to see their own patterns and growth areas through the same lens — with the Git history as the single source of truth.