This post is a follow-up to my first post on the product catalog with MongoDB. Now that we have established a strong basis for our product catalog, we are ready to dive into one of its most important features: Product Search.
This feature presents many challenges:
- Responses within milliseconds for hundreds of items
- Faceted search on many attributes: category, brand, …
- Efficient sorting on several attributes: price, rating
- Pagination, which requires deterministic ordering
Fortunately these challenges are not new, and search engines are built exactly for this purpose. In the following sections we will see how to use search engines with MongoDB, which one works best, and whether we can leverage MongoDB’s own indexing instead.
Overall Architecture
The traditional architecture combines an RDBMS for storage, a search engine for querying, and a caching layer in front.
It presents the following limitations:
- Three different systems to maintain: RDBMS, search engine, caching layer
- The RDBMS schema is complex and static
- The application needs to speak several query languages
The new architecture with MongoDB drops the RDBMS and the caching layer: the application talks to an API layer backed by MongoDB and the search engine.
A few things are improved:
- With the API layer, the application issues a single call, reducing complexity and latency
- No need for a caching layer, since MongoDB’s read performance is close to that of Memcached and the like
- The schema is dynamic and maps well to the API calls
Now on to the indexing part: how can the data from MongoDB be indexed into a Search Engine? There are many different ways to do this, but one easy option is to use Mongo-Connector.
What is the Mongo-Connector?
- Open-source project hosted on GitHub (mongodb-labs/mongo-connector)
- Python app that reads MongoDB’s oplog and publishes the updates to a target of your choice
- Supports initial sync by dumping the collections
- Ships with doc managers for Solr, Elasticsearch, or another MongoDB cluster
- Easily extensible to update other systems, like an SQL database
So here we are: the connector is installed on our search engine server, ready to start indexing from MongoDB. Wait, which data are we supposed to index?
The Source Data
In the first post, we devised models that are quite normalized. The goal was to make queries and updates more natural while avoiding the mega-document syndrome. But now we are faced with searches that need to pull together fields from items, variants, pricing, and ratings all at once.
Thus more challenges appear:
- Attributes at the variant level: color, size, etc.
- Attributes from other documents: pricing, ratings, etc.
- Displaying the matching variant’s image and details (e.g. the red version of a shoe)
- An item may have dozens of matching variants, yet we still need to display a single item (deduplication of results)
- Properly indexing those fields in a search engine is a challenge in itself
This calls for a single document per item containing all the necessary information, while omitting all fields that are not needed for browsing and searching. We will call this an Item Summary and use the following model.
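A minimal sketch of what such a summary document could look like; the field names here (attrs, sattrs, desc, vars) are assumptions, chosen to stay consistent with the queries shown later in this post:

```js
{
  "_id": "30671",
  "department": "Shoes",
  "category": "Shoes/Women/Pumps",
  "title": "Guess Elanie Pump",          // hypothetical product
  "price": 89.99,
  "rating": 4.5,
  "attrs": [ "Brand=Guess", "Color=Red", "Size=6" ],        // facetable attributes
  "sattrs": [ "Style=Designer", "Heel Height=4.0" ],        // secondary, non-indexed attributes
  "desc": [
    { "name": "title", "val": "Guess Elanie Pump" },        // searchable text values
    { "name": "brand", "val": "Guess" }
  ],
  "vars": [
    { "sku": "93101", "img": "http://example.com/93101-red.jpg", "attrs": [ "Color=Red" ] }
  ]
}
```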
These documents can be updated as often as needed by a background process running custom code. The updates will make their way into MongoDB’s oplog, and we can tap into it using the Mongo-Connector.
Using Solr
After a successful setup, your Solr admin interface should show a core up and running, ready for indexing.
The first and most difficult step with Solr is to define a proper indexing schema in schema.xml.
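Based on our summary model, a sketch of the relevant portion might look like this (the field types are borrowed from Solr’s stock example schema; the exact names are assumptions):

```xml
<fields>
  <field name="_id" type="string" indexed="true" stored="true" required="true"/>
  <field name="department" type="string" indexed="true" stored="true"/>
  <field name="category" type="string" indexed="true" stored="true"/>
  <field name="title" type="text_general" indexed="true" stored="true"/>
  <field name="price" type="float" indexed="true" stored="true"/>
  <!-- dynamic fields: the wildcard may only appear at the start or the end of the name -->
  <dynamicField name="attrs.*" type="string" indexed="true" stored="true"/>
  <dynamicField name="desc.*" type="text_general" indexed="true" stored="true"/>
</fields>
<uniqueKey>_id</uniqueKey>
```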
Now we are ready to start up the connector with a single command line.
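A sketch of that command, assuming the summaries live in the catalog.summary namespace and Solr runs locally (depending on your Mongo-Connector version, -d takes either the doc manager module name or the path to its file):

```bash
mongo-connector -m localhost:27017 -t http://localhost:8983/solr \
    -n catalog.summary -d solr_doc_manager
```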
The connector will first dump the summary collection, indexing all documents. When done, it will continue processing anything coming into the oplog. Looking at the Solr interface, here is how a document comes out.
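A sketch with hypothetical values; note how the lists from our model are flattened into numbered field names:

```
"_id": "30671"
"department": "Shoes"
"title": "Guess Elanie Pump"
"attrs.0": "Brand=Guess"
"attrs.1": "Color=Red"
"desc.0.name": "title"
"desc.0.val": "Guess Elanie Pump"
```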
The exercise reveals several serious limitations:
- The wildcard used to match dynamic fields can only appear at the beginning or the end of a field name. As a result we cannot match fields like “desc.*.val”
- Wildcard characters cannot be used with faceting; one would have to specify “attrs.0”, “attrs.1”, and so on. Even then, each facet is computed independently of the others
- In short, JSON lists get flattened, which makes them very difficult to use. The only workaround is to somehow use named fields instead, e.g. “attr_color”, “attr_size”
- Mongo-Connector uses pySolr to convert MongoDB documents into XML usable by Solr, and this turns out to be quite slow (about 200 docs/s)
Using Elasticsearch
After a successful setup, your Elasticsearch install is up and ready for indexing.
The nice thing about ES is that it conveniently understands the whole JSON document from MongoDB right off the bat :) The only indexing change we have to make is to tell ES not to tokenize the facet values. This can easily be done by submitting a new type mapping.
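A sketch of such a mapping, assuming the connector writes to an index named catalog.summary with a document type of summary (the exact names vary with the connector version), using the pre-2.0 not_analyzed string syntax:

```bash
curl -XPUT 'http://localhost:9200/catalog.summary/summary/_mapping' -d '{
  "summary": {
    "properties": {
      "attrs": { "type": "string", "index": "not_analyzed" }
    }
  }
}'
```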
Everything else will get indexed auto-magically! Now we are ready to start up the connector with a single command line.
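Again a sketch, mirroring the Solr command but pointing at the local ES node and its doc manager instead:

```bash
mongo-connector -m localhost:27017 -t http://localhost:9200 \
    -n catalog.summary -d elastic_doc_manager
```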
At this point we can browse ES’s interface (e.g. the Head plugin) or query through the command line.
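For example, a hypothetical search for “pump” with facet counts on the attrs field (this sketch uses the facets API of the ES versions of that era; newer versions replace it with aggregations):

```bash
curl -XGET 'http://localhost:9200/catalog.summary/_search?pretty' -d '{
  "query": { "match": { "desc.val": "pump" } },
  "facets": { "attrs": { "terms": { "field": "attrs" } } }
}'
```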
Elasticsearch gives us what we need:
- Faceting works as expected on the “attrs” list
- We get full-text search on all elements of the “desc” list
- Results include MongoDB’s original document
- Query results are automatically cached for faster future retrieval
- Mongo-Connector is much faster with ES (about 1,000 docs/s)
Using MongoDB’s Indexing
One might wonder: how about not using a search engine at all? Doesn’t MongoDB have full-text search? Could we build faceted search too? MongoDB’s FTS is still a fairly new feature and has a number of limitations, since it is built purely on a regular B-tree index:
- It slows down writes quite significantly (each word is an index entry)
- The index ends up being very large (no de-duplication of terms)
- It only supports Latin-based languages
- There is no automatic caching of results
Now that we know the limitations, is it of any use? Beyond those limitations, it actually does the job pretty well, especially for smaller catalogs (in the GBs). We can create the following FTS index.
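A sketch, assuming we want full-text search over the title and the description values, with title matches weighted higher (ensureIndex was the shell helper of that era; it is createIndex nowadays):

```js
db.summary.ensureIndex(
    { title: "text", "desc.val": "text" },   // index both fields as text
    { weights: { title: 10 } }               // a title match counts 10x more
)
```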
Then we can query it using the “$text” operator, which is integrated into the main query syntax.
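For instance, with a hypothetical search term:

```js
db.summary.find( { $text: { $search: "pump" } } )
```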
This returns fast, but the results come back in arbitrary order, which is not ideal. You can instead get the results sorted by matching score.
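This is done with the “textScore” meta projection and sort:

```js
db.summary.find(
    { $text: { $search: "pump" } },
    { score: { $meta: "textScore" } }        // project the matching score
).sort( { score: { $meta: "textScore" } } ).limit( 50 )
```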
Note here that “limit()” is needed to avoid blowing up RAM usage during the sort, which would otherwise make it abort. One limitation here is that, since there is no query cache, querying for a very common word (other than stop words) yields many results to sort on every call.
Now on to faceting, which can be tackled using compound indices. Here we discuss faceting as a search feature rather than as a count of facets. The following fields are of interest:
- Department, e.g. “Shoes”
- Category path, e.g. “Shoes/Women/Pumps”
- Price
- List of item attributes, e.g. “Brand=Guess”, which includes variant attributes, e.g. “Color=Red”
- List of item secondary attributes, e.g. “Style=Designer”, which includes variant secondary attributes, e.g. “Heel Height=4.0”. These fields do not need to be indexed.
A typical faceted query includes a department, a category, a price range, and a list of attributes.
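A sketch of such a query against our summary collection (all values hypothetical):

```js
db.summary.find( {
    department: "Shoes",
    category: /^Shoes\/Women/,               // prefix match on the category path
    price: { $gte: 50, $lte: 120 },
    attrs: { $all: [ "Brand=Guess", "Color=Red" ] }
} )
```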
What kind of indices do we need for this query? Ideally they should start with the department, which serves as a left-side equality that must always be specified (i.e. the user won’t see facets until they pick a department). They should probably end with _id in order to allow fast deterministic ordering (useful for pagination). In between, we need a mix of the fields so that MongoDB can always pick an index that quickly narrows down the results. For our purpose we will define the following indices, which we create in the sketch right after this list:
- department + attrs + category + price + _id
- department + category + price + _id
- department + price + _id
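In mongo shell syntax:

```js
db.summary.ensureIndex( { department: 1, attrs: 1, category: 1, price: 1, _id: 1 } )
db.summary.ensureIndex( { department: 1, category: 1, price: 1, _id: 1 } )
db.summary.ensureIndex( { department: 1, price: 1, _id: 1 } )
```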
The goal is not to achieve perfect coverage of all possibilities, since that would require too many indices. But the indices above make sure that MongoDB narrows the documents down to a short list in most cases. Another question concerns the “$all” operator: which attribute value is actually used to locate the index branch? Using “explain()” quickly reveals that the first item in the $all list is the most significant one.
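For example (hypothetical values):

```js
db.summary.find( {
    department: "Shoes",
    attrs: { $all: [ "Color=Red", "Brand=Guess" ] }
} ).explain()
// the index bounds in the output only constrain attrs to "Color=Red";
// "Brand=Guess" is filtered while scanning that index range
```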
MongoDB has no idea which value is the most restrictive, so it just assumes the first one is good enough. Consequently the query is much faster if the first item in the $all list is the most restrictive one; all the others are just matched during the index scan. Using static facet information from the catalog, the application can thus speed up the query by placing the facet value that matches the fewest items first.
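For instance, if the catalog tells us that far fewer items match “Ladder Material=Steel” than “Brand=Acme” (both values hypothetical), the query becomes:

```js
db.summary.find( {
    department: "Ladders",
    attrs: { $all: [ "Ladder Material=Steel", "Brand=Acme" ] }
} )
```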
In this case, “Ladder Material=Steel” comes first in the query since it narrows the index scan the most.
Closing Comments
In conclusion, we’ve devised an ideal Product Search architecture and established a good source of data to be indexed. We then compared different solutions for full-text search and faceting, with the following outcome…
Search engine advantages:
- Index size (about 10x smaller than MongoDB’s)
- Indexing speed
- Query speed, with an integrated cache for complex queries
- Support for all languages
- Built-in faceted search, including facet counts
MongoDB’s Indexing advantages:
- Built into the data store; no additional server or software needed
- A single query fetches the results
- Faceting without text search can be faster when using the proper $all ordering
As things stand, the winning combination for Product Search is Elasticsearch, combined with MongoDB as the data store! MongoDB’s FTS will hopefully keep getting better and cover larger use cases, but it will need index structures other than the B-tree to get close to Lucene-based search engines.