
Lokalise Analytics

Lokalise Analytics gives you clear insights, turning translation guesswork into informed decisions that help you save time and resources.

Written by Ilya Krukowski
Updated over 2 weeks ago

Lokalise Analytics delivers precise insights that help you make informed decisions, turning translation uncertainties into clear choices. This allows you to save time and better allocate your resources.

With Analytics, you can monitor translation volume and task performance, as described in the sections below.

Accessing Lokalise Analytics

To get started with Lokalise Analytics, click the icon in the left-hand menu:


Keep in mind that you'll only see data for the team you've currently selected. To switch teams, click on your avatar in the bottom left corner and choose a different team from the menu.


Volume dashboard

Accessing the Volume dashboard

The Volume dashboard helps you predict future costs and resource needs, allowing you to optimize your business operations. Here, you’ll find an overview of your past data, including the volume of words translated.

To access the Volume dashboard, switch to the corresponding tab on the Analytics page:

Filtering data

You can filter your data using Date and Target language filters.

Customers on the Enterprise plan have access to additional filters that allow filtering by project, project tags, or key tags.

For example, you can filter by key tag to get a detailed view of translation volumes for specific content types, such as software strings, customer support guides, or marketing materials like announcement emails. Alternatively, use project tags to monitor the translation volume across different products within your organization’s portfolio.

Forecasting resource requirements and predicting future costs

You may want to check the Processed words article to understand how Lokalise measures usage.

Using the Volume dashboard, you can predict future costs and resource requirements to optimize business operations.

Translation methods overview

The Processed words overview table displays the number of processed words and translation methods used for each language, based on the selected date range in your filters.

Columns:

  • Language — click a language name to filter by it. If multiple projects use the same language name but different IDs, they appear as one language.

  • Processed words — the number of processed words for that language.

  • Translation methods — these columns show how many words were translated using each method, including Pro AI and Standard AI/MT.

Base processed words by month

The Processed words by month chart shows how many words were actively processed in your project each month. This includes words added or updated in the base language, as well as any target-language words modified through translations, AI/MT actions, imports, or automations.

Chart details:

  • Dark blue — processed words originating from base language changes (new content or updates).

  • Light blue — total processed words for the month, including target-language words generated or modified through translation activity.

The word count is cumulative, even if your team uses multiple base languages.

Processed words translated per month

The Processed words translated per month chart breaks down translation methods for each target language.

Compare translation methods

These graphs help you evaluate and choose the most effective translation method based on performance. They answer the question, "How can I determine the best translation method for my needs?"

Base processed words by translation method

This graph shows the number of base processed words grouped by translation methods.

Edit rate by translation method

The key metric for assessing translation quality is the edit rate. The Edit rate by translation method chart shows the monthly edit rates for various translation methods. It indicates how many keys were edited by a reviewer—the fewer keys edited, the better the initial translation quality.

In this example, about 29% of translations done by Pro AI were edited in November:

The edit rate is calculated as the ratio of meaningfully changed translations to the total number of keys translated using a specific method.

Edits are attributed to the date of the original translation, not the date of the edit.

| Translation methods          | Meaningful edits | Edit rate    |
|------------------------------|------------------|--------------|
| Keys translated by MT #1–#5  | Edits #4 and #5  | 2 / 5 = 40%  |
| Keys translated by API #1–#3 | Edit #2          | 1 / 3 = 33%  |
| Key translated by Humans #1  | Edit #1          | 1 / 1 = 100% |
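The arithmetic behind this example can be sketched in a few lines. The counts are the hypothetical ones from the table; Lokalise computes this metric server-side.

```python
# Sketch of the edit-rate formula: meaningful edits / keys translated by a method.
# The counts below are the hypothetical ones from the example table.
method_stats = {
    "MT":    {"keys_translated": 5, "meaningful_edits": 2},
    "API":   {"keys_translated": 3, "meaningful_edits": 1},
    "Human": {"keys_translated": 1, "meaningful_edits": 1},
}

def edit_rate(stats):
    """Return the edit rate as a percentage of translated keys."""
    return 100 * stats["meaningful_edits"] / stats["keys_translated"]

for method, stats in method_stats.items():
    print(f"{method}: {edit_rate(stats):.0f}%")  # MT: 40%, API: 33%, Human: 100%
```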

Processed words by project and action

This graph shows the number of created and updated processed words on a per-project basis.

Processed words by project and method

This graph shows the number of processed words on a per-project basis, broken down by translation method.

Translation method breakdown

Each translation method has its own section that shows how many translations were created or updated over time, along with the edit rate for that method. This helps you compare Pro AI, Standard AI/MT, Translation Memory, and Human translation to understand their impact on speed, cost, and quality.

The metrics included are:

  • Translations created — the number of new translations generated using this method.

  • Translations updated — translations that were modified after creation.

  • Edit rate — the percentage of translations that required updates.

  • Created vs updated chart — a monthly breakdown showing how much content was newly generated vs revised.

These charts allow you to quickly evaluate the performance of each translation method.


Tasks dashboard

Accessing the Tasks dashboard

The Tasks dashboard allows you to track and analyze the time spent on various tasks, helping you improve productivity and efficiency.


Task metrics are presented using Trends visualization, which displays current metrics alongside comparisons to a previous period.

Filtering data

Please note that only Enterprise plan customers have access to task, task type, project tag, and contributor filters. Speak with us.

Within the Tasks dashboard, you can apply the following filters:

  • Date — filters by the date range when tasks were created.

  • Target language — filters tasks involving specific languages and calculates "words" only for those target languages.

  • Task — displays only the selected tasks.

  • Task type — filters by task types; AI types are excluded by default.

  • Project — filters tasks from selected projects.

  • Project ID — filters tasks from selected project IDs.

  • Contributor — includes tasks assigned to a specific contributor and counts words only from languages to which that contributor is assigned.

  • Project tag — includes tasks from all projects with the selected tags.

  • Key tags — shows tasks containing only keys with specified tags.

Basic tasks metrics

There are a few basic metrics under the Tasks tab:

  • Tasks created — number of tasks created within a period.

  • Tasks closed — number of tasks closed within a period.

  • Overdue active tasks — tasks created within the date range that are overdue and not yet completed.

  • Overdue completed tasks — tasks completed within the date range that were overdue at the time of completion.

Track and analyze the time spent on tasks

Tasks time overview

The Tasks time section provides comprehensive insights into the time taken from task creation to completion, excluding tasks that are still in progress. This tool enables you to monitor and analyze how time is spent on various tasks, helping you identify inefficiencies and bottlenecks that could impact productivity. By understanding these patterns, you can make informed decisions to optimize workflows and prevent delays.

Additionally, this feature supports more accurate future planning by using past data to predict task durations. It addresses the crucial question: "How can I effectively track and analyze time spent on tasks to improve both productivity and efficiency?"

The displayed data includes:

  • Average — the arithmetic average time to complete a task.

  • Median — the median time it takes to complete tasks.

  • 80th percentile — the time within which 80% of tasks are completed, in line with the Pareto principle.

  • Longest time — the maximum time taken to complete a task.

  • Average words per day — the number of words from target languages divided by the time it took to complete each language, averaged across languages.

Note on task time

  • For human tasks, time is measured in days and hours. For AI-powered tasks, time is measured in hours and seconds, as these tasks are typically much faster.

  • Time is always counted from the moment the task is created.

  • Lokalise calculates time per language, not per overall task. For example, if a task includes two languages—one completed in 1 day, and the other in 7 days—the average time is shown as 4 days, based on the individual language durations, not the total time until the task as a whole is closed.
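The per-language averaging rule can be illustrated with the figures from the example above:

```python
# Lokalise averages per-language completion times, not the overall task duration.
# Hypothetical task: one language finished in 1 day, another in 7 days.
language_durations_days = [1, 7]

average_days = sum(language_durations_days) / len(language_durations_days)
print(average_days)  # 4.0 — shown as "4 days", not the 7 days until the whole task closed
```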

Note on the 80th percentile

We recommend using the 80th percentile instead of average time when analyzing how long it takes to complete translations. This metric shows the time needed to complete 80% of tasks, providing a more reliable measure that accounts for past data while ignoring outliers that might skew average calculations.

In the example above, we can see that 80% of the tasks were completed in less than a week.
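To see why the 80th percentile is the more robust choice, consider hypothetical completion times where a single outlier skews the average but not the percentile. The nearest-rank method below is only an illustration; Lokalise's exact percentile calculation isn't documented here.

```python
# Why the 80th percentile resists outliers better than the average.
# Hypothetical task completion times in days; one task is an extreme outlier.
durations = [1, 2, 2, 3, 3, 4, 4, 5, 6, 60]

average = sum(durations) / len(durations)

def percentile(values, p):
    """Nearest-rank percentile: smallest value covering p% of the data."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

p80 = percentile(durations, 80)
print(average)  # 9.0 days — dragged up by the single 60-day task
print(p80)      # 5 days — 80% of tasks finished within this time
```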

Time on task and average time on task by size

These charts offer detailed insights into the time spent on tasks.

Time on task

The Time on task chart shows where completion times fall within the expected range and highlights unusually long tasks.

  • Y-axis — days to complete the task

  • X-axis — month when the task was created

  • Bubble size — total words from the task's target languages

Each bubble represents a task for a target language. The higher the bubble, the longer it took to complete that language in the task, while the size of the bubble indicates the number of words translated or reviewed.

Average time on task by task size

  • Y-axis — days to complete the task

  • X-axis — month when the task was created

  • Series — tasks categorized into three buckets based on the sum of words from the task's target languages (0–20th percentile, 20th–80th, and 80th–100th).

In the example above, we can see that over the past year, the time to complete medium-sized tasks has decreased by nearly half.

Average time on task per task size and language

  • Y-axis — human-readable name of the target language (grouped by name, regardless of language ID differences between projects).

  • X-axis — sum of words from a target language across all matching tasks.

  • Values — number of days to complete tasks (more days means a more saturated color).

In the example above, we can see that for medium-sized tasks, German translations are completed twice as fast as Italian and Turkish translations.

Average time on task per task size and contributor

  • Y-axis — name of the target language assignee (select the person who closed the task if multiple contributors were involved; grouped by name even if assignee IDs differ between projects).

  • X-axis — sum of words from a target language across all matching tasks.

  • Values — number of days to complete tasks (more days result in a more saturated color).

In the example above, we can see that Jim typically completes his tasks very quickly, even the large-sized ones, whereas Ann generally takes more time to finish her tasks.

Detailed task data

Please note that only Enterprise customers have access to this table. Speak with us.

The Detailed task data report provides an in-depth view of all tasks, broken down by target language and contributor. It enables precise analysis of translation, review, and AI task performance across your localisation workflows.

Each row in this report represents a unique combination of task + target language + contributor. For example, if a task includes two target languages with two contributors assigned to each, the report will display four rows (2 languages × 2 contributors).

Columns included:

  • Task ID — unique identifier of the task.

  • Title — title of the task.

  • Project — name of the project where the task resides.

  • Type — task type (Translation, Review, AI task, etc.).

  • Source language — language from which translation or review is performed.

  • Target language — target language for the specific row.

  • Created date — date when the task was created.

  • Due date — deadline set by the task creator.

  • Completion date — date when the task was marked as closed.

  • Status — current task status (e.g., Completed, In progress).

  • Keys — total number of keys included in the task.

  • Base words — number of base (source) words included in the task.

  • Processed words — number of words actively handled during task processing, across both base and target languages. This metric includes words processed through manual edits, imports, AI/MT, and automation.

  • Time to complete — time elapsed between creation and closure of the task.

  • Created by — user who created the task.

  • Completed by — contributor who completed the work for that specific language.

  • Closed by — user who officially closed the task.

  • TM 0% — number of base words in the task with a 0–49% translation memory (TM) match.

  • TM 50% — number of base words with a 50–74% TM match.

  • TM 75% — number of base words with a 75–84% TM match.

  • TM 85% — number of base words with an 85–94% TM match.

  • TM 95% — number of base words with a 95–99% TM match.

  • TM 100% — number of base words with a 100% TM match.

How to use it

  • Use filters (by project, contributor, language, or date range) to narrow down specific workflows or teams.

  • Export the data to spreadsheets or BI tools for further analysis — for example, to calculate average completion time per contributor, language, or TM reuse percentage.

  • For large localisation programs, use this detailed dataset as a foundation for custom dashboards (e.g., completion time per language, TM efficiency per contributor) instead of relying solely on high-level summaries.
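As a sketch of the kind of analysis the export enables, the snippet below computes the average completion time per contributor and a high-match TM reuse share. The rows and field names are hypothetical, not the exact export headers.

```python
# Sketch: analyzing an export of the Detailed task data report.
# Rows and field names are hypothetical; adapt them to the actual export columns.
from collections import defaultdict

rows = [
    {"contributor": "Jim", "time_to_complete_days": 2,
     "base_words": 1000, "tm_100": 400, "tm_95": 100},
    {"contributor": "Jim", "time_to_complete_days": 3,
     "base_words": 500, "tm_100": 50, "tm_95": 0},
    {"contributor": "Ann", "time_to_complete_days": 8,
     "base_words": 800, "tm_100": 200, "tm_95": 40},
]

def tm_reuse_pct(row):
    """Share of base words with a high (95%+) TM match."""
    return 100 * (row["tm_100"] + row["tm_95"]) / row["base_words"]

# Group completion times per contributor, then average them.
times_by_contributor = defaultdict(list)
for row in rows:
    times_by_contributor[row["contributor"]].append(row["time_to_complete_days"])

for name, times in times_by_contributor.items():
    print(f"{name}: avg {sum(times) / len(times):.1f} days")
print(f"TM reuse, first row: {tm_reuse_pct(rows[0]):.0f}%")
```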

Important notes on Detailed task data report

  • Because rows are split by language and contributor, tasks that include multiple languages or multiple assignees will appear as multiple rows — one for each unique combination.

  • The Completed by column always shows the contributor responsible for that specific language entry.


Special notes

Translation methods

A translation method is identified as the source of the first substantial translation of a key.

For example:

  • The first entry for the German language (at the bottom) is just an empty translation added when a key was initially created. This empty translation is ignored.

  • The second change, which fills the translation with content, is considered the translation method for the current key. In this case, it would be attributed to translation memory.

  • Any subsequent edits do not affect the translation method, as the key was initially translated by translation memory. These are treated as regular edits.

Lokalise recognizes the following translation methods:

  • Translation memory — translations extracted from memory during imports, bulk actions, or automation. Suggestions from the right-side panel are not included.

  • Human translation — translations manually performed in the editor, including using suggestions from the right-side panel or ordering professional translations through Lokalise or Gengo.

  • Machine translation — machine translations applied via bulk actions, automation, or by pressing the Google Translate button for empty values in the editor. Right-side panel suggestions are not included.

  • AI — translations conducted automatically by AI, excluding suggestions from the right-side panel.

  • API — translations set through Lokalise APIv2.

  • Offline — translations made offline and uploaded via an XLIFF file.

  • Import — translations imported from any external source, including Lokalise APIv2, GitHub, Zendesk, etc.

  • Other — includes all other activities, such as copying keys between projects, pseudolocalization, find and replace, and restoring translations from history.

Meaningful edits

An edit is considered meaningful if it meets all the following criteria:

  • It arises from other translation method events (excluding “Other” and “Import”).

  • It occurs within 90 days of the original translation, to ensure data consistency.

  • It involves at least a minor change that has been actually applied.

  • The translation is non-empty post-edit.

Technical alterations (like find and replace), clicking into a translation but not making any change or simply clearing translations (manually or in bulk) are not considered meaningful.
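The criteria above could be expressed as a predicate like the following. The event fields are hypothetical; Lokalise's internal representation may differ.

```python
# Sketch of the meaningful-edit criteria. Field names are hypothetical.
EXCLUDED_METHODS = {"Other", "Import"}

def is_meaningful_edit(event):
    """Apply the four meaningful-edit criteria to an edit event."""
    return (
        event["original_method"] not in EXCLUDED_METHODS   # eligible translation method
        and event["days_since_translation"] <= 90          # within the 90-day window
        and event["text_before"] != event["text_after"]    # a change was actually applied
        and event["text_after"].strip() != ""              # translation is non-empty post-edit
    )

edit = {"original_method": "AI", "days_since_translation": 10,
        "text_before": "Hallo Welt", "text_after": "Hallo, Welt!"}
print(is_meaningful_edit(edit))  # True
```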

Using existing keys when testing translations (and ensuring data accuracy)

TL;DR For accurate analytics, don't copy both source and target content when testing translations. Start fresh: create a new project, upload only source texts, and apply translations there.

When testing translations in Lokalise — whether you’re using AI, machine translation (MT), or human translation — it’s important to prepare your project so that Lokalise Analytics can track everything accurately. This includes details like translation methods, post-editing effort, and general workflow insights.

We don’t recommend copying both source and target content from one project to another when running tests or proofs of concept. If both source and target values are copied without changes, Analytics may carry over the original translation method.

Even if you apply new translations later, the system might not track those correctly, leading to incomplete or misleading data. This affects your ability to measure post-editing effort, compare translation approaches, or understand workflow efficiency.

The recommended way to get clean and reliable data is to set up a fresh project, upload only the source texts, and apply new translations there. This ensures Analytics recognizes all translation activity as new and tracks it properly.

If you’ve already copied both source and target content, making edits to the source may help trigger proper tracking. But for the most accurate results, especially when testing AI workflows or running comparisons, always start with source-only content in a new project.


Known limitations

Data availability

  • Data updates: Once per day.

  • Data availability: Covers the last 3 years (37 months).

  • Volume dashboard: New customers will see the Volume dashboard the day after they create or import at least 10 keys.

  • Tasks dashboard: New customers will see the Tasks dashboard the day after they create their first task.

Volume calculation rules

  • Base language requirement:

    • Volume data does not include translations without a base language.

    • Example: If a project has English as the base language, translating German into Italian won’t be reflected in the Volume dashboard.

    • If a base language cannot reflect imported content, we recommend using Lokalise Tasks and setting German as the reference language. This ensures that the volume of work is properly calculated.

  • Branches exclusion:

    • Branches are excluded from both the Volume and Tasks dashboards.

    • Translations and tasks created in branches are not counted in Volume edits and are fully excluded from the Tasks dashboard.

    • When a branch is merged into the main project, its translation volume is attributed to the "Other" translation method.

Filters with short date ranges

Lokalise Analytics allows you to select date ranges shorter than a full month. However, the feature is optimized for filtering by at least one month. When selecting shorter periods (e.g., a week), the displayed data may be inaccurate.
