Lokalise Analytics delivers precise insights that help you make informed decisions, turning translation uncertainties into clear choices. This allows you to save time and better allocate your resources.
With Analytics, you can:
Monitor task efficiency by language.
Get an overview of project performance.
Track task assignments.
Analyze task volume and average word counts.
Measure translated word volume and completion rates by language.
Gain insights into different translation methods, including human translations, AI-driven translations, and those using translation memory.
Evaluate the efficiency and accuracy of different translation approaches.
Track and analyze time spent on tasks to spot inefficiencies and bottlenecks, and avoid delays.
Accessing Lokalise Analytics
To get started with Lokalise Analytics, click the icon in the left-hand menu:
Keep in mind that you'll only see data for the team you've currently selected. To switch teams, click on your avatar in the bottom left corner and choose a different team from the menu.
Volume dashboard
Accessing the Volume dashboard
The Volume dashboard helps you predict future costs and resource needs, allowing you to optimize your business operations. Here, you’ll find an overview of your past data, including the volume of words translated.
To access the Volume dashboard, switch to the corresponding tab on the Analytics page:
Filtering data
You can filter your data using Date and Target language filters.
Customers on the Enterprise plan have access to additional filters, allowing you to filter by project, project tags, or key tags.
For example, you can filter by key tag to get a detailed view of translation volumes for specific content types, such as software strings, customer support guides, or marketing materials like announcement emails. Alternatively, use project tags to monitor the translation volume across different products within your organization’s portfolio.
Forecasting resource requirements and predicting future costs
You may want to check the Processed words article to understand how Lokalise measures usage.
Using the Volume dashboard, you can predict future costs and resource requirements to optimize business operations.
Translation methods overview
The Processed words overview table displays the number of processed words and translation methods used for each language, based on the selected date range in your filters.
Columns:
Language — click a language name to filter by it. If multiple projects use the same language name but different IDs, they appear as one language.
Processed words — the number of processed words for the language.
Translation methods — these columns show how many words were translated using each method, including Pro AI and Standard AI/MT.
Base processed words by month
The Processed words by month chart shows how many words were actively processed in your project each month. This includes words added or updated in the base language, as well as any target-language words modified through translations, AI/MT actions, imports, or automations.
Chart details:
Dark blue — processed words originating from base language changes (new content or updates).
Light blue — total processed words for the month, including target-language words generated or modified through translation activity.
The word count is cumulative, even if your team uses multiple base languages.
Processed words translated per month
The Processed words translated per month chart breaks down translation methods for each target language.
Compare translation methods
These graphs help you evaluate and choose the most effective translation method based on performance. They answer the question, "How can I determine the best translation method for my needs?"
Base processed words by translation method
This graph shows the number of base processed words grouped by translation methods.
Edit rate by translation method
The key metric for assessing translation quality is the edit rate. The Edit rate by translation method chart shows the monthly edit rates for various translation methods. It indicates how many keys were edited by a reviewer—the fewer keys edited, the better the initial translation quality.
In this example, about 29% of translations done by Pro AI were edited in November:
The edit rate is calculated as the ratio of meaningfully changed translations to the total number of keys translated using a specific method.
Edits are attributed to the date of the original translation, not the date of the edit.
Translation method | Meaningful edits | Edit rate |
Key translated by MT #1 | | 2 / 5 = 40% |
Key translated by API #1 | | 1 / 3 = 33% |
Key translated by Humans #1 | EDIT #1 | 1 / 1 = 100% |
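As a rough sketch, the per-method edit rate can be computed from key-level records like the ones in the table above (pure Python; the data is hypothetical and mirrors the 40% / 33% / 100% example):

```python
from collections import Counter

# Hypothetical records: one entry per key, recording the method of its first
# meaningful translation and whether it later received a meaningful edit.
keys = (
    [{"method": "MT", "edited": True}] * 2 + [{"method": "MT", "edited": False}] * 3 +
    [{"method": "API", "edited": True}] * 1 + [{"method": "API", "edited": False}] * 2 +
    [{"method": "Human", "edited": True}] * 1
)

translated = Counter(k["method"] for k in keys)                  # keys per method
edited = Counter(k["method"] for k in keys if k["edited"])       # meaningfully edited keys

# Edit rate = meaningfully edited keys / keys translated with that method.
edit_rate = {method: edited[method] / translated[method] for method in translated}
print(edit_rate)  # MT: 0.4, API: 0.333..., Human: 1.0
```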
Processed words by project and action
This graph shows the number of created and updated processed words on a per-project basis.
Processed words by project and method
This graph shows the number of processed words on a per-project basis, broken down by translation method.
Translation method breakdown
Each translation method has its own section that shows how many translations were created or updated over time, along with the edit rate for that method. This helps you compare Pro AI, Standard AI/MT, Translation Memory, and Human translation to understand their impact on speed, cost, and quality.
The metrics included are:
Translations created — the number of new translations generated using this method.
Translations updated — translations that were modified after creation.
Edit rate — the percentage of translations that required updates.
Created vs updated chart — a monthly breakdown showing how much content was newly generated vs revised.
These charts allow you to quickly evaluate the performance of each translation method.
Tasks dashboard
Accessing the Tasks dashboard
The Tasks dashboard allows you to track and analyze the time spent on various tasks, helping you improve productivity and efficiency.
Task metrics are presented using Trends visualization, which displays current metrics alongside comparisons to a previous period.
Filtering data
Please note that only Enterprise plan customers have access to task, task type, project tag, and contributor filters. Speak with us.
Within the Tasks dashboard, you can apply the following filters:
Date — filters by the date range when tasks were created.
Target language — filters tasks involving specific languages and calculates "words" only for those target languages.
Task — displays only the selected tasks.
Task type — filters by task types; AI types are excluded by default.
Project — filters tasks from selected projects.
Project ID — filters tasks from selected project IDs.
Contributor — includes tasks assigned to a specific contributor and counts words only from languages to which that contributor is assigned.
Project tag — includes tasks from all projects with the selected tags.
Key tags — shows tasks containing only keys with specified tags.
Basic task metrics
There are a few basic metrics under the Tasks tab:
Tasks created — number of tasks created within a period.
Tasks closed — number of tasks closed within a period.
Overdue active tasks — tasks created within the date range that are overdue and not yet completed.
Overdue completed tasks — tasks completed within the date range that were overdue at the time of completion.
Track and analyze the time spent on tasks
Tasks time overview
The Tasks time section provides comprehensive insights into the time taken from task creation to completion, excluding tasks that are still in progress. This tool enables you to monitor and analyze how time is spent on various tasks, helping you identify inefficiencies and bottlenecks that could impact productivity. By understanding these patterns, you can make informed decisions to optimize workflows and prevent delays.
Additionally, this feature supports more accurate future planning by using past data to predict task durations. It addresses the crucial question: "How can I effectively track and analyze time spent on tasks to improve both productivity and efficiency?"
The displayed data includes:
Average — the arithmetic average time to complete a task.
Median — the median time it takes to complete tasks.
80th percentile — the time under which 80% of tasks are completed, reflecting the Pareto ratio.
Longest time — the maximum time taken to complete a task.
Average words per day — the number of target-language words divided by the time it took to complete each language, averaged across languages.
Note on task time
For human tasks, time is measured in days and hours. For AI-powered tasks, time is measured in hours and seconds, as these tasks are typically much faster.
Time is always counted from the moment the task is created.
Lokalise calculates time per language, not per overall task. For example, if a task includes two languages—one completed in 1 day, and the other in 7 days—the average time is shown as 4 days, based on the individual language durations, not the total time until the task as a whole is closed.
Note on the 80th percentile
We recommend using the 80th percentile instead of average time when analyzing how long it takes to complete translations. This metric shows the time needed to complete 80% of tasks, providing a more reliable measure that accounts for past data while ignoring outliers that might skew average calculations.
In the example above, we can see that 80% of the tasks were completed in less than a week.
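To see why the 80th percentile is more robust than the average, here is a small illustration with made-up completion times in days; a single outlier inflates the mean while barely moving the percentile:

```python
import statistics

completion_days = [1, 2, 2, 3, 3, 4, 4, 5, 6, 40]  # one outlier task (40 days)

average = statistics.mean(completion_days)            # 7.0, skewed by the outlier
median = statistics.median(completion_days)           # 3.5
p80 = statistics.quantiles(completion_days, n=10)[7]  # 80th percentile: 5.8
print(average, median, p80)
```

Here 80% of tasks finished in under 6 days even though the average suggests a full week.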
Time on task and average time on task by size
These charts offer detailed insights into the time spent on tasks.
Time on task
The time on task chart highlights areas where completion times are within the expected range and where there might be unusually long tasks.
Y-axis — days to complete the task
X-axis — month when the task was created
Bubble size — total words from the task's target languages
Each bubble represents a task for a target language. The higher the bubble, the longer it took to complete that language in the task, while the size of the bubble indicates the number of words translated or reviewed.
Average time on task by task size
Y-axis — days to complete the task
X-axis — month when the task was created
Series — tasks categorized into three buckets based on the sum of words from the task's target languages (0-20th percentile, 20-80th, and 80-100th).
In the example above, we can see that over the past year, the time to complete medium-sized tasks has decreased by nearly half.
Average time on task per task size and language
Y-axis — human-readable name of the target language (grouped by name, regardless of language ID differences between projects).
X-axis — sum of words from a target language across all matching tasks.
Values — number of days to complete tasks (more days means a more saturated color).
In the example above, we can see that for medium-sized tasks, German translations are completed twice as fast as Italian and Turkish translations.
Average time on task per task size and contributor
Y-axis — name of the target language assignee (select the person who closed the task if multiple contributors were involved; grouped by name even if assignee IDs differ between projects).
X-axis — sum of words from a target language across all matching tasks.
Values — number of days to complete tasks (more days result in a more saturated color).
In the example above, we can see that Jim typically completes his tasks very quickly, even the large-sized ones, whereas Ann generally takes more time to finish her tasks.
Detailed task data
Please note that only Enterprise plan customers have access to this table. Speak with us.
The Detailed task data report provides an in-depth view of all tasks, broken down by target language and contributor. It enables precise analysis of translation, review, and AI task performance across your localisation workflows.
Each row in this report represents a unique combination of task + target language + contributor. For example, if a task includes two target languages and two contributors (one per language), the report will display four rows (2 languages × 2 contributors).
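The row fan-out can be pictured as a cartesian product of a task's target languages and contributors (illustrative only; the field names are invented):

```python
from itertools import product

# A hypothetical task with two target languages and two contributors.
task = {"id": 123, "languages": ["de", "fr"], "contributors": ["ann", "jim"]}

# Each unique (language, contributor) combination becomes its own report row.
rows = [
    {"task": task["id"], "language": lang, "contributor": person}
    for lang, person in product(task["languages"], task["contributors"])
]
print(len(rows))  # 4 rows: 2 languages x 2 contributors
```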
Columns included:
Task ID — unique identifier of the task.
Title — title of the task.
Project — name of the project where the task resides.
Type — task type (Translation, Review, AI task, etc.).
Source language — language from which translation or review is performed.
Target language — target language for the specific row.
Created date — date when the task was created.
Due date — deadline set by the task creator.
Completion date — date when the task was marked as closed.
Status — current task status (e.g., Completed, In progress).
Keys — total number of keys included in the task.
Base words — number of base (source) words included in the task.
Processed words — number of words actively handled during task processing, across both base and target languages. This metric includes words processed through manual edits, imports, AI/MT, and automation.
Learn more in the Processed words article.
Time to complete — time elapsed between creation and closure of the task.
Created by — user who created the task.
Completed by — contributor who completed the work for that specific language.
Closed by — user who officially closed the task.
TM 0% — number of base words in the task with a 0–49% translation memory (TM) match.
TM 50% — number of base words with a 50–74% TM match.
TM 75% — number of base words with a 75–84% TM match.
TM 85% — number of base words with an 85–94% TM match.
TM 95% — number of base words with a 95–99% TM match.
TM 100% — number of base words with a 100% TM match.
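The TM columns partition base words by match score. A sketch of that bucketing, using the thresholds from the column definitions above (the function name is made up for illustration):

```python
# Lower bound of each TM-match bucket, as defined by the report columns.
buckets = [("TM 0%", 0), ("TM 50%", 50), ("TM 75%", 75),
           ("TM 85%", 85), ("TM 95%", 95), ("TM 100%", 100)]

def tm_bucket(match_pct: int) -> str:
    """Return the report column a given TM match percentage falls into."""
    label = buckets[0][0]
    for name, lower in buckets:
        if match_pct >= lower:
            label = name  # keep the highest bucket whose lower bound is met
    return label

print(tm_bucket(60))   # TM 50%  (50-74% match)
print(tm_bucket(99))   # TM 95%  (95-99% match)
print(tm_bucket(100))  # TM 100%
```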
How to use it
Use filters (by project, contributor, language, or date range) to narrow down specific workflows or teams.
Export the data to spreadsheets or BI tools for further analysis — for example, to calculate average completion time per contributor, language, or TM reuse percentage.
For large localisation programs, use this detailed dataset as a foundation for custom dashboards (e.g., completion time per language, TM efficiency per contributor) instead of relying solely on high-level summaries.
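For instance, once the report is exported to CSV, average completion time per contributor can be derived with a simple group-by (a pure-Python sketch; the column names are assumed from the table above and the data is made up):

```python
import csv
import io
from collections import defaultdict

# A tiny stand-in for an exported Detailed task data CSV.
export = io.StringIO(
    "Completed by,Time to complete\n"
    "ann,6\n"
    "ann,4\n"
    "jim,2\n"
)

# Group completion times (in days) by contributor.
times = defaultdict(list)
for row in csv.DictReader(export):
    times[row["Completed by"]].append(float(row["Time to complete"]))

avg_days = {person: sum(t) / len(t) for person, t in times.items()}
print(avg_days)  # {'ann': 5.0, 'jim': 2.0}
```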
Important notes on Detailed task data report
Because rows are split by language and contributor, tasks that include multiple languages or multiple assignees will appear as multiple rows — one for each unique combination.
The Completed by column always shows the contributor responsible for that specific language entry.
Special notes
Translation methods
A translation method indicates how a key was first translated. It is determined by the source of the first meaningful translation added to the key.
For example:
In the example above, the first entry for the German language (at the bottom) is an empty translation created when the key was initially added. Empty translations are ignored.
The next change adds actual content to the translation. This is considered the translation method for that key. In this example, the method would be translation memory.
Any later edits do not change the translation method. Since the key was first translated using translation memory, that method remains assigned to the key. Later changes are treated as regular edits.
Lokalise tracks the following translation methods:
Translation memory — translations extracted from memory during imports, bulk actions, or automation. Suggestions from the right-side panel are not included.
Human translation — translations entered manually in the editor. This also includes using suggestions from the right-side panel or ordering professional translations through Lokalise or Gengo.
Machine translation — machine translations applied via bulk actions, automation, or by pressing the Google Translate button for empty values in the editor. Right-side panel suggestions are not included.
AI — translations generated automatically by AI, excluding suggestions from the right-side panel.
API — translations added through Lokalise APIv2.
Offline — translations made offline and uploaded via an XLIFF file.
Other — all other activities, including copying keys between projects, pseudolocalization, find and replace, and restoring translations from history.
Meaningful edits
An edit is considered meaningful if it meets all of the following conditions:
It results from a translation method event (excluding Other and Import).
It occurs within 90 days, to maintain data consistency.
It includes at least a minor change that was actually applied to the translation.
The translation remains non-empty after the edit.
The following actions are not considered meaningful edits:
Technical changes (such as find and replace)
Opening a translation without making any changes
Clearing a translation, either manually or in bulk
These actions do not modify the translation content in a way that is considered meaningful for reporting purposes.
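The conditions above can be condensed into a single boolean check (illustrative only; the field names are invented):

```python
def is_meaningful_edit(event: dict) -> bool:
    """Sketch of the meaningful-edit rules: all four conditions must hold."""
    return (
        event["method"] not in {"Other", "Import"}   # from a tracked translation method
        and event["days_since_translation"] <= 90    # within the 90-day window
        and event["content_changed"]                 # at least a minor change applied
        and event["new_value"] != ""                 # translation stays non-empty
    )

edit = {"method": "Human", "days_since_translation": 3,
        "content_changed": True, "new_value": "Hallo"}
print(is_meaningful_edit(edit))  # True
```

Clearing a translation, for example, fails the non-empty condition, so it is never counted.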
Metric differences across Tasks, Analytics, and Statistics
Word-count metrics across Tasks, Analytics, and Statistics may differ because each view represents a different part of the workflow. Some metrics reflect task scope, some reflect processed content, and others reflect translation or review activity.
In addition, these views may use different counting methods. For example, Statistics may attribute translation activity using the source-language word count, while Analytics may reflect words that were actually created or modified during processing. Counts may also be based on source-language words or target-language words, which do not always produce the same totals.
Because of this, these metrics should be interpreted in context rather than compared as exact equivalents.
Reviewed words vs. edit rate
A high number of reviewed words does not imply a high edit rate. Reviewed words show how much content went through review, while edit rate shows how much of that content was actually changed.
This means a user may review a large amount of content, confirm most of it as-is, and edit only a small portion. In that case, reviewed-word totals will be high, while edit rate will remain low.
Using existing keys when testing translations (and ensuring data accuracy)
For accurate analytics, avoid copying both source and target translations when testing. Instead, create a new project, upload only the source texts, and apply translations there.
When testing translations in Lokalise — whether using AI, machine translation (MT), or human translation — it’s important to prepare the project in a way that allows Analytics to track translation activity correctly. This includes metrics such as translation methods, post-editing effort, and overall workflow insights.
Not recommended
Copying both source and target content from one project to another is not recommended for testing or proof-of-concept scenarios. If both values are copied without changes, Analytics may retain the original translation method associated with those translations.
As a result, any new translations applied later may not be tracked as expected. This can lead to incomplete or misleading data when analyzing translation methods, post-editing effort, or workflow performance.
Recommended
To ensure clean and reliable analytics data, the recommended approach is to create a new project, upload only the source texts, and generate translations within that project. This allows Analytics to recognize all translation activity as new and track it accurately.
If both source and target content have already been copied, modifying the source text may help trigger proper tracking. However, for the most reliable results (especially when testing AI workflows or comparing translation approaches) it's best to start with source-only content in a new project.
Known limitations
Data availability
Data updates: Once per day.
Data availability: Covers the last 3 years (37 months).
Volume dashboard: New customers will see the Volume dashboard the day after they create or import at least 10 keys.
Tasks dashboard: New customers will see the Tasks dashboard the day after they create their first task.
Volume calculation rules
Base language requirement:
Volume data includes only translations that originate from the base language.
For example, if a project uses English as the base language, translating German into Italian will not appear in the Volume dashboard.
If the base language does not reflect the imported content, we recommend using Tasks and setting the correct language (for example, German) as the reference language. This ensures the translation workload is calculated correctly.
Branches exclusion:
Branches are excluded from both the Volume and Tasks dashboards.
Translations and tasks created in branches are not counted as volume edits and do not appear in the Tasks dashboard.
When a branch is merged into the main project, the translation volume from that branch is attributed to the Other translation method.
Filters with short date ranges
Lokalise Analytics allows filtering by date ranges shorter than a full month. However, the feature is optimized for monthly filtering. When selecting shorter periods (for example, a week), the displayed data may be less accurate.