Handling Natural Languages
Key Takeaways
- The engine supports all languages: It simply matches the text typed in the search bar with the text found in the index.
- If you want the engine to recognize plurals and stop words like “the”, “a”, “of”, etc., you’ll need to specify the language used in your index.
- The engine uses language-specific dictionaries to remove stop words, detect declined or pluralized forms, separate compound words, and to handle Asian logograms (CJK), Arabic vowels, and diacritics.
Algolia supports all languages
The Algolia engine supports all languages out of the box. It is language agnostic; it matches the text in the search bar with the text in the index. This is what we call textual matching.
For example, if you have an index with only English text, and a user searches with Japanese text, the engine won’t return any results because the Latin alphabet doesn’t match Japanese characters. If your users are searching in Japanese, your index should likely contain Japanese text. If you want to support multiple languages, the most common solution is to create one index per language.
To support all languages, we use a wide array of natural language techniques. These techniques range from general, such as finding words or using Unicode, to specific, including distinguishing letters from logograms, breaking down compound words, and using single-language dictionaries for vocabulary.
We’ve organized this page around five natural language understanding strategies:
- Engine-level processing (normalization)
- Dictionaries
- Query Processing
- Natural Language Processing (NLP) with Rules
- Configuring Typo Tolerance
Some language-based techniques (such as normalization) play an integral role in the engine and are performed with every indexing and search operation. These are generally not configurable. Other techniques rely on specialized dictionaries, which facilitate word and word-root detection; these come with several configurable options. Finally, Algolia offers many other techniques (like typo tolerance and Rules) that can be enabled, disabled, or fine-tuned according to the use case, using Algolia’s API settings and suggested best practices.
How the engine normalizes data
The engine performs normalization both at indexing and query time, ensuring consistency in how your data is represented as well as matched.
You can’t globally disable normalization, but you can disable it for certain special characters using the keepDiacriticsOnCharacters setting. Additionally, our normalization process is language-agnostic: what we do for one language we do for all.
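For example, here’s a minimal sketch, using the Python API client, of keeping specific diacritics from being normalized (the credentials and index name are placeholders):

```python
from algoliasearch.search_client import SearchClient

# Placeholder credentials and index name.
client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
index = client.init_index("products")

# Keep "ø" and "é" intact instead of normalizing them to "o" and "e".
index.set_settings({
    "keepDiacriticsOnCharacters": "øé"
})
```

With this setting, a record containing “Jørgensen” only matches queries that also include the “ø” character.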
What does normalization mean?
- Turn all characters to lower case
- Remove all diacritics (e.g., accents, umlauts, Arabic short vowels); however, you can define diacritics to keep using the keepDiacriticsOnCharacters index setting
- Remove punctuation within words (e.g., apostrophes)
- Manage punctuation between words
- Use word separators (such as spaces, but also other characters)
- Include or exclude non-alphanumeric characters (separatorsToIndex)
- Transform traditional Chinese characters into simplified (modern) Chinese
Some of these actions, such as removing punctuation within words, managing punctuation between words, and handling non-alphanumeric characters in general, are part of the tokenization process. Understanding how we handle this process is key to understanding how we concatenate and split words at indexing and query time.
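To illustrate the non-alphanumeric case, here’s a sketch of the separatorsToIndex setting mentioned above: by default, characters like “+” and “#” act as separators and aren’t indexed, but you can choose to index them (reusing the index object from the previous sketch):

```python
# Index "+" and "#" instead of treating them as plain separators,
# so queries like "C++" and "C#" match those exact terms.
index.set_settings({
    "separatorsToIndex": "+#"
})
```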
Adding Language Specificity using Dictionaries
As already suggested, some of our automated techniques don’t work in all languages. Here we discuss how we add language-specific functionality to the engine.
No automatic language detection
Algolia doesn’t attempt to detect the language of your records nor the language of your end users as they type in queries.
Therefore, to benefit from our language-specific algorithms, you need to tell the engine in what language you want your records to be interpreted.
- If you don’t pick a language, we assume that you want to support all languages. The drawback is that mixing every language’s peculiarities creates ambiguities. For example, “paste” is the plural of “pasta” in Italian, but it would also be treated as the plural of “pasta” in English, which is wrong: “paste” in English is a word in its own right (to spread).
- It’s okay to mix two or three languages in a single index, as long as you specify them in your settings. However, you should prepare your indices and records appropriately. For more on this, please refer to our multiple languages tutorial.
Even though the engine can do most tasks without knowing the language of an index, there are some tasks that require knowledge of the language. For example, the engine can only compare plural to singular forms by knowing the language. The same applies to removing small words like “to” and “the” (stop words).
Because the default language of an index is all languages, enabling removeStopWords or ignorePlurals without setting an index’s language can match the wrong plurals and remove the wrong stop words. It’s therefore very important to set the query languages of all your indices.
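In practice, this means declaring the index’s language before enabling these features. A minimal sketch with the Python client, continuing with the index object from above (the choice of English is illustrative):

```python
index.set_settings({
    "queryLanguages": ["en"],   # interpret queries as English
    "indexLanguages": ["en"],   # apply English-specific processing at indexing time
    "ignorePlurals": True,      # "car" and "cars" now match each other
    "removeStopWords": True,    # "the", "a", "of", etc. are ignored in queries
})
```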
Using dictionaries
Several of the language-related methods require the use of dictionaries. With dictionaries, the engine can apply language-specific, word-based logic to your data and your end user’s queries. Algolia maintains separate, language-specific dictionaries for:
- removing stop words
- detecting pluralized and other declined forms (alternative forms of words due to number, case, or gender)
- splitting compound words (also known as decompounding)
- handling Asian logograms (CJK)
Algolia provides default dictionaries for all supported query languages. While Algolia regularly updates these dictionaries, you can also customize the stop words dictionaries, declensions dictionaries, and decompounding dictionaries for your use case.
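As an illustration, recent versions of the API clients expose a Dictionaries API for this kind of customization. The sketch below assumes a client version that supports it; the objectID and word are illustrative:

```python
# Add a custom entry to the English stop words dictionary.
# Assumes a client version that exposes the Dictionaries API.
client.save_dictionary_entries(
    "stopwords",
    [{
        "objectID": "stopword-down-en",
        "language": "en",
        "word": "down",
        "state": "enabled",
    }],
)
```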
Disabling typo tolerance and prefix search on specific words
The advancedSyntax parameter lets you disable typo tolerance on specific words in a query by wrapping them in double quotes. For example, the query foot problems is typo tolerant on both query words, while "foot" problems is only typo tolerant on “problems”. This parameter also disables prefix searching on words inside the double quotes.
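A sketch of how this might look with the Python client, using the queries from the example above:

```python
# Enable the advanced query syntax for this index.
index.set_settings({"advancedSyntax": True})

# Typo tolerance applies to both words:
index.search("foot problems")

# "foot" must match exactly (no typos, no prefix matching);
# "problems" remains typo tolerant:
index.search('"foot" problems')
```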
Typo Tolerance and Languages
What is a typo?
- A missing letter in a word: “hllo” → “hello”
- An extraneous letter: “heello” → “hello”
- Inverted letters: “hlelo” → “hello”
- A substituted letter: “heilo” → “hello”
Typo tolerance allows users to make mistakes while typing and still find the words they are looking for. This is done by matching words that are close in spelling.
Other spelling errors
We do not count extra or missing spaces and punctuation as typos, but we only handle them if typoTolerance is enabled (set to true, min, or strict). For example:
- A missing space between two words, which we handle with splitting: “helloworld” → “hello world”
- An extraneous space or punctuation, which we handle with concatenation: “hel lo” → “hello”
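For reference, here’s a sketch of the main typo tolerance settings; the thresholds shown are the engine’s documented defaults:

```python
index.set_settings({
    "typoTolerance": True,       # also accepts "min" or "strict"
    "minWordSizefor1Typo": 4,    # words need at least 4 letters to allow 1 typo (default)
    "minWordSizefor2Typos": 8,   # and at least 8 letters to allow 2 typos (default)
})
```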
Typos as language-dependent
The examples above use English to illustrate the principle. That works because English is phonemic: it uses individual characters to represent sounds, which combine to form words. This is what makes spelling errors possible.
We don’t support typo tolerance for logogram-based languages (such as Chinese and Japanese), as these languages use pictorial characters to represent partial or full words instead of single letters to represent sounds.
For alphabet-based, phonemic languages (English, French, Russian, etc.), we offer many ways to configure the engine to improve typo tolerance.
Natural Language Processing with Rules
You can set up Rules that tell the engine to look for specific words or phrases in a query, and take a specific action or change its default behavior when it finds them.
For example, the engine can convert some query terms into filters. If a user types a filter value, say “red”, you can use this term as a filter instead of a search term. With the query “red dress”, the engine then looks for the word “dress” only within the “red” records (based on a filter attribute). Removing filter values from the query string and using them directly as filters is called dynamic filtering.
Dynamic filtering is only one way that Rules can understand and detect the intent of the user.
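As a sketch, a dynamic-filtering Rule might look like the following with the Python client. It assumes a color attribute declared as searchable in attributesForFaceting; the objectID is illustrative:

```python
# When the query contains a value of the "color" facet (e.g., "red"),
# apply it as a facet filter and remove it from the query text.
index.save_rule({
    "objectID": "dynamic-color-filtering",
    "conditions": [{"pattern": "{facet:color}", "anchoring": "contains"}],
    "consequence": {
        "params": {
            "automaticFacetFilters": ["color"],
        }
    },
})
```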