This feature is currently in beta.
Lokalise Analytics delivers precise insights that help you make informed decisions, turning translation uncertainties into clear choices. This allows you to save time and better allocate your resources.
With Analytics, you can:
Measure translated word volume and completion rates by language.
Gain insights into different translation methods, including human translations, AI-driven translations, and those using translation memory.
Accessing Lokalise Analytics
To get started with Lokalise Analytics, click the icon in the left-hand menu:
Keep in mind that you'll only see data for the team you've currently selected. To switch teams, click on your avatar in the bottom left corner and choose a different team from the menu.
Volume dashboard
Accessing the Volume dashboard
The Volume dashboard helps you predict future costs and resource needs, allowing you to optimize your business operations. Here, you’ll find an overview of your past data, including the volume of words translated.
To access the Volume dashboard, switch to the corresponding tab on the Analytics page:
Filtering data
You can filter your data using Date and Target language filters.
Customers on the Enterprise plan have access to additional filters, allowing you to filter by project, project tags, or key tags.
For example, you can filter by key tag to get a detailed view of translation volumes for specific content types, such as software strings, customer support guides, or marketing materials like announcement emails. Alternatively, use project tags to monitor the translation volume across different products within your organization’s portfolio.
Forecasting resource requirements and predicting future costs
Using the Volume dashboard, you can predict future costs and resource requirements to optimize business operations.
Translation methods overview
The Volume overview table displays the translation methods used for each language, based on the selected date range in your filters.
Columns:
Target language — click a language name to filter by it. If multiple projects use the same language name but different IDs, they appear as one language.
Words — this counts the words translated from the base language into the target language. It does not reflect the total number of words in the target language.
Translation methods — these columns show how many words were translated using each method.
Base words added by month
The Base words added by month chart shows:
The number of words added or updated in base language keys.
How many of those were translated into target languages.
Chart details:
Dark blue — words added in the base language.
Light blue — cumulative words added, including modifications of existing base language content.
The word count is cumulative, even if your team uses multiple base languages.
Understanding base word iterations
To make this clear, let’s break it down with an example:
Adding a new key: You create a translation key welcome with the base value "Hello" (English). This counts as one new base word.
Translating a new key: You translate this key for the first time into Latvian as "Sveiki". This counts as one new translated word.
Updating a target translation only: Later, you update the Latvian translation from "Sveiki" to "Labdien" but make no changes to the base value "Hello". This counts as an edit. Analytics will show an edit rate of 100% and an edit distance of 100%, as the translation was fully rewritten.
Modifying base language copy in the existing key: Suppose you change the base English value from "Hello" to "Hey". This counts as one modified base word.
Editing the translation after the base language copy was modified: Later, you update the Latvian translation to "Cau" so that it matches the new base language copy. This counts as one new translated word.
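For illustration only, here is a minimal Python sketch of how the change events in this walkthrough map to the categories above. The event fields are hypothetical and do not reflect Lokalise's internal data model.

```python
# Illustrative sketch only: classifies the change events from the example
# above into the categories used by the Volume dashboard. The event fields
# (language, had_previous_value, base_changed_since_translation) are
# hypothetical; Lokalise's internal counting logic is more involved.

def classify(event: dict) -> str:
    """Map a single change event to a base-word/translated-word category."""
    if event["language"] == "base":
        # A change to base language copy
        return "modified base word" if event["had_previous_value"] else "new base word"
    if not event["had_previous_value"]:
        return "new translated word"   # first translation of the key
    if event["base_changed_since_translation"]:
        return "new translated word"   # re-translation after the base copy changed
    return "edit"                      # counts toward edit rate and edit distance

events = [
    {"language": "base", "had_previous_value": False},                                        # add "Hello"
    {"language": "lv", "had_previous_value": False, "base_changed_since_translation": False}, # "Sveiki"
    {"language": "lv", "had_previous_value": True, "base_changed_since_translation": False},  # "Labdien"
    {"language": "base", "had_previous_value": True},                                         # "Hello" -> "Hey"
    {"language": "lv", "had_previous_value": True, "base_changed_since_translation": True},   # "Cau"
]
print([classify(e) for e in events])
# ['new base word', 'new translated word', 'edit', 'modified base word', 'new translated word']
```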
Base words translated per month
The Base words translated per month chart breaks down translation methods for each target language.
Compare translation methods
These graphs help you evaluate and choose the most effective translation method based on performance. They answer the question, "How can I determine the best translation method for my needs?"
For example, let’s compare translations done with Lokalise AI to those done with traditional machine translation engines.
Edit rate by translation method
The key metric for assessing translation quality is the edit rate. The Edit rate by translation method chart shows the monthly edit rates for various translation methods. It indicates how many keys were edited by a reviewer—the fewer keys edited, the better the initial translation quality.
In this example, only about 6% of translations done by Lokalise AI were edited in November:
In comparison, 20% of traditional machine translations were edited in September, before AI was actively used:
This suggests that Lokalise AI produces higher quality translations and is more efficient than traditional machine translation, guiding businesses in selecting the best translation method.
The edit rate is calculated as the ratio of meaningfully changed translations to the total number of keys translated using a specific method.
Edits are attributed to the date of the original translation, not the date of the edit.
| Translation methods | Meaningful edits | Edit rate |
| --- | --- | --- |
| Keys translated by MT (#1–#5) | EDIT #1, EDIT #2 | 2 / 5 = 40% |
| Keys translated by API (#1–#3) | EDIT #1 | 1 / 3 = 33% |
| Key translated by Humans (#1) | EDIT #1 | 1 / 1 = 100% |
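As an informal illustration (not Lokalise's internal implementation), the calculation in this table can be expressed in a few lines of Python; the data structure below is made up for the example.

```python
# Illustrative sketch only: derives an edit rate per translation method from
# a list of translated keys. The data structure is made up for the example
# and does not reflect Lokalise's internal model.
from collections import defaultdict

def edit_rates(keys):
    """keys: iterable of {"method": str, "meaningfully_edited": bool} dicts."""
    totals, edited = defaultdict(int), defaultdict(int)
    for key in keys:
        totals[key["method"]] += 1
        if key["meaningfully_edited"]:
            edited[key["method"]] += 1
    # Edit rate = meaningfully edited keys / all keys translated by that method
    return {method: edited[method] / totals[method] for method in totals}

# Five keys translated by machine translation, two of them meaningfully edited
sample = (
    [{"method": "MT", "meaningfully_edited": True}] * 2
    + [{"method": "MT", "meaningfully_edited": False}] * 3
)
print(edit_rates(sample))  # {'MT': 0.4}  ->  2 / 5 = 40%
```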
Words edited by translation method
The secondary metric to consider is editing distance, which measures how much a translation was changed by the reviewer. This could be a minor change, like altering a single letter, or a major one, like rewriting the entire translation.
To view this metric, use the Words edited by translation method chart, which tracks the edit distance for words by translation method over time. This metric complements the edit rate: while the edit rate shows how many keys were edited, the average edit distance reveals how extensively the keys were modified.
More changes indicate that the translation was less accurate, resulting in a higher editing distance. In the example provided, 34% of the characters in translations done by Lokalise AI were rewritten:
A similar percentage is seen with traditional machine translations:
This suggests that, in this example, when a translation needed editing, the effort required was comparable for both methods.
Edits are attributed to the date of the original translation, not the date when they were edited.
If more than 500 characters are changed in a translation, or if a translation longer than 2,000 characters is modified, Analytics shows 100% of words edited, since such a change reflects a significant effort from the reviewer.
| Base language | Translation | Edit | Words edited |
| --- | --- | --- | --- |
| Hello | Hallo | (not edited) | Non-edited keys are ignored |
| Hello | Hallo | Guten tag! | 100% (fully rewritten). Total: 83% (average of edited) |
| Hello | Hallo | Hallo, Frau! | 58%. Total: 58% (average of edited) |
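For intuition, here is a rough Python sketch of one way to express "how much of a translation was rewritten" as a normalized edit distance. Lokalise's exact formula may differ, for example in how fully rewritten or very long edits are capped at 100%.

```python
# Illustrative sketch only: one way to express "how much of a translation was
# rewritten" as a normalized Levenshtein distance. Lokalise's exact formula
# may differ (for example, fully rewritten or very long edits count as 100%).

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, start=1):
        current = [i]
        for j, char_b in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,                       # delete char_a
                current[j - 1] + 1,                    # insert char_b
                previous[j - 1] + (char_a != char_b),  # substitute
            ))
        previous = current
    return previous[-1]

def percent_edited(original: str, edited: str) -> float:
    """Share of the edited translation that differs from the original."""
    if not edited:
        return 0.0
    return min(1.0, levenshtein(original, edited) / len(edited))

print(round(percent_edited("Hallo", "Hallo, Frau!") * 100))  # ~58, as in the table
```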
Translation method breakdown
Each translation method has its own section that displays several metrics and their changes over time. This allows you to compare different translation methods and find the right balance between translation speed, cost, and quality.
The metrics included are:
Unedited keys over the selected period
Edited keys over the selected period
Edit rate over the selected period
Average edit distance over the selected period
From the example above, we can see that translations provided by Lokalise AI required significantly fewer edits, although the effort involved in making those edits (the edit distance) was similar to that for machine translations.
Tasks dashboard
Accessing the Tasks dashboard
The Tasks dashboard allows you to track and analyze the time spent on various tasks, helping you improve productivity and efficiency.
Task metrics are presented using Trends visualization, which displays current metrics alongside comparisons to a previous period.
Filtering data
Please note that only Enterprise plan customers have access to task, task type, project tag, and contributor filters.
Within the Tasks dashboard, you can apply the following filters:
Date — filters by the date range when tasks were created.
Target language — filters tasks involving specific languages and calculates "words" only for those target languages.
Task — displays only the selected tasks.
Task type — filters by task types; AI types are excluded by default.
Project — filters tasks from selected projects.
Project ID — filters tasks from selected project IDs.
Contributor — includes tasks assigned to a specific contributor and counts words only from languages to which that contributor is assigned.
Project tag — includes tasks from all projects with the selected tags.
Key tags — shows tasks containing only keys with specified tags.
Basic tasks metrics
There are a few basic metrics under the Tasks tab:
Tasks created — number of tasks created within a period.
Tasks closed — number of tasks closed within a period.
Overdue active tasks — tasks created within the date range that are overdue and not yet completed.
Overdue completed tasks — tasks completed within the date range that were overdue at the time of completion.
Track and analyze the time spent on tasks
Tasks time overview
For human tasks, time is measured in days and hours. However, for AI-powered tasks, time is measured in hours and seconds, as these tasks are typically much faster.
The Tasks time block provides details on the duration from task creation to completion, excluding unfinished tasks. This tool helps you monitor and analyze the time spent on tasks, enabling more accurate future predictions.
It helps you answer the question, "How can I track and analyze the time spent on various tasks to improve productivity and efficiency?"
The displayed data includes:
Average — the arithmetic average time to complete a task.
Median — the median time it takes to complete tasks.
80th percentile — the time under which 80% of tasks are completed, reflecting the Pareto ratio.
Longest time — the maximum time taken to complete a task.
Average words per day — the number of words from target languages divided by the hours it took to complete a language, averaged out.
Waiting time — the duration from when the task was created to when the first key was marked as done.
Note on the 80th percentile
We recommend using the 80th percentile instead of average time when analyzing how long it takes to complete translations. This metric shows the time needed to complete 80% of tasks, providing a more reliable measure that accounts for past data while ignoring outliers that might skew average calculations.
In the example above, we can see that 80% of the tasks were completed in less than a week.
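As a quick illustration of why the 80th percentile is more robust than the average, consider this small Python example with made-up completion times:

```python
# Illustrative sketch with made-up completion times (in days): a single very
# long task pulls the average up, while the 80th percentile stays close to
# the typical experience.
import statistics

completion_days = [1, 2, 2, 3, 3, 4, 4, 5, 6, 90]  # one outlier task

print(statistics.mean(completion_days))                # 12.0 days (skewed by the outlier)
print(statistics.median(completion_days))              # 3.5 days
print(statistics.quantiles(completion_days, n=10)[7])  # ~5.8 days (80th percentile)
```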
Time on task and average time on task by size
These charts offer detailed insights into the time spent on tasks.
Time on task
The time on task chart highlights areas where completion times are within the expected range and where there might be unusually long tasks.
Y-axis — days to complete the task
X-axis — month when the task was created
Bubble size — total words from the task's target languages
Each bubble represents a task for a target language. The higher the bubble, the longer it took to complete that language in the task, while the size of the bubble indicates the number of words translated or reviewed.
Average time on task by task size
Y-axis — days to complete the task
X-axis — month when the task was created
Series — tasks categorized into three buckets based on the sum of words from the task's target languages (0-20th, 20-80th, and 80-100th percentiles).
In the example above, we can see that over the past year, the time to complete medium-sized tasks has decreased by nearly half.
Average time on task per task size and language
Y-axis — human-readable name of the target language (grouped by name, regardless of language ID differences between projects).
X-axis — sum of words from a target language across all matching tasks.
Values — number of days to complete tasks (more days means a more saturated color).
In the example above, we can see that for medium-sized tasks, German translations are completed twice as fast as Italian and Turkish translations.
Average time on task per task size and contributor
Y-axis — name of the target language assignee (if multiple contributors were involved, the person who closed the task is shown; grouped by name even if assignee IDs differ between projects).
X-axis — sum of words from a target language across all matching tasks.
Values — number of days to complete tasks (more days result in a more saturated color).
In the example above, we can see that Jim typically completes his tasks very quickly, even the large-sized ones, whereas Ann generally takes more time to finish her tasks.
Task time overview
Please note that only Enterprise customers have access to this table.
This table provides a summary of your tasks:
Task title — click to see more details about the task.
Project name
Status — possible statuses include created, queued (waiting for a prerequisite task), overdue, completed.
Type — task types such as translation, review, AI translate, AI LQA.
Base language
Target language
Words — number of weighted words.
Repetitions — words repeated within the task.
Keys — number of keys analyzed from the task.
Start date, Due date, Completion date
Completed by — language assignee who closed the task.
Time to complete — duration to finish the task.
Special notes
Translation methods
A translation method is identified as the source of the first substantial translation of a key.
For example:
The first entry for the German language (at the bottom) is just an empty translation added when a key was initially created. This empty translation is ignored.
The second change, which fills the translation with content, is considered the translation method for the current key. In this case, it would be attributed to translation memory.
Any subsequent edits do not affect the translation method, as the key was initially translated by translation memory. These are treated as regular edits.
Lokalise recognizes the following translation methods:
Translation memory — translations extracted from memory during imports, bulk actions, or automation. Suggestions from the right-side panel are not included.
Human translation — translations manually performed in the editor, including using suggestions from the right-side panel or ordering professional translations through Lokalise or Gengo.
Machine translation — machine translations applied via bulk actions, automation, or by pressing the Google-translate button for empty values in the editor. Right-side panel suggestions are not included.
Lokalise AI — translations conducted automatically by Lokalise AI, excluding suggestions from the right-side panel.
API — translations set through Lokalise APIv2 (see the sketch after this list).
Offline — translations made offline and uploaded via an XLIFF file.
Other — includes all other activities, such as copying keys between projects, pseudolocalization, find and replace, and restoring translations from history.
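For reference, translations counted under the API method are those set programmatically. The sketch below shows one way to update a translation with the requests library; the token and IDs are placeholders, and the Lokalise API documentation remains the authoritative reference for the request format.

```python
# Illustrative sketch only: updating a single translation via Lokalise APIv2.
# Such programmatic updates are counted under the "API" translation method.
# The token and IDs below are placeholders; consult the Lokalise API
# documentation for the authoritative request format.
import requests

API_TOKEN = "YOUR_API_TOKEN"          # placeholder
PROJECT_ID = "1234567890abcdef.12"    # placeholder project ID
TRANSLATION_ID = 42                   # placeholder translation ID

response = requests.put(
    f"https://api.lokalise.com/api2/projects/{PROJECT_ID}/translations/{TRANSLATION_ID}",
    headers={"X-Api-Token": API_TOKEN},
    json={"translation": "Sveiki"},
)
response.raise_for_status()
print(response.json())
```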
Meaningful edits
An edit is considered meaningful if it meets all the following criteria:
It arises from one of the translation method events (excluding “Other” and “Import”).
It occurs within 90 days to ensure data consistency.
It involves at least a minor change that was actually applied.
The translation is non-empty post-edit.
Technical alterations (like find and replace) or simply clearing translations (manually or in bulk) are not considered meaningful.