Working with JSON in ClickHouse

This guide provides common patterns for working with JSON data replicated from MongoDB to ClickHouse via ClickPipes.

Suppose we created a collection t1 in MongoDB to track customer orders.

The MongoDB CDC connector replicates MongoDB documents to ClickHouse using the native JSON data type, so the replicated table t1 in ClickHouse contains one row per order document, with the full document stored in a JSON column.
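
As an illustration, you can inspect the replicated rows directly; the query below is a sketch against the table described in this guide:

```sql
-- Inspect one replicated order: _id is the MongoDB primary key and
-- _full_document holds the original document as a ClickHouse JSON value.
SELECT _id, _full_document
FROM t1
LIMIT 1;
```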

Table schema

The replicated tables use this standard schema; a sketch of the corresponding table definition follows the list:

  • _id: Primary key from MongoDB
  • _full_document: MongoDB document replicated as JSON data type
  • _peerdb_synced_at: Records when the row was last synced
  • _peerdb_version: Tracks the version of the row; incremented when the row is updated or deleted
  • _peerdb_is_deleted: Marks whether the row is deleted
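
Putting these columns together, a table created by the MongoDB ClickPipe looks roughly like the sketch below. The exact column types are illustrative and may differ from what ClickPipes generates:

```sql
-- Approximate shape of a replicated table (types are illustrative).
CREATE TABLE t1
(
    _id                String,
    _full_document     JSON,
    _peerdb_synced_at  DateTime64(9),
    _peerdb_version    Int64,
    _peerdb_is_deleted Int8
)
ENGINE = ReplacingMergeTree(_peerdb_version)
ORDER BY _id;
```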

ReplacingMergeTree table engine

ClickPipes maps MongoDB collections into ClickHouse using the ReplacingMergeTree table engine family. With this engine, updates are modeled as inserts with a newer version (_peerdb_version) of the document for a given primary key (_id), enabling efficient handling of updates, replaces, and deletes as versioned inserts.

ReplacingMergeTree deduplicates rows asynchronously in the background, so a query may temporarily see multiple versions of the same row. To guarantee that a query returns only the latest version of each row, use the FINAL modifier.
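
For example:

```sql
-- FINAL merges row versions at query time, returning only the
-- latest version of each _id.
SELECT *
FROM t1 FINAL;
```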

Handling deletes

Deletes from MongoDB are propagated as new rows marked as deleted using the _peerdb_is_deleted column. You typically want to filter these out in your queries:
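
The sketch below combines FINAL with the delete filter:

```sql
-- Return only the latest version of each document, excluding deletes.
SELECT *
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```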

You can also create a row-level policy to automatically filter out deleted rows instead of specifying the filter in each query:
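
For example, with an arbitrarily named policy active_rows_only:

```sql
-- All SELECT queries on t1 will silently skip rows flagged as deleted.
CREATE ROW POLICY active_rows_only ON t1
FOR SELECT USING _peerdb_is_deleted = 0
TO ALL;
```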

Querying JSON data

You can directly query JSON fields using dot syntax:
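
The field names below (customer_name, total) are placeholders for the fields in your own documents:

```sql
-- Dot syntax reads paths inside the JSON document.
SELECT
    _full_document.customer_name AS customer_name,
    _full_document.total         AS total
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```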

Dynamic type

In ClickHouse, each field in a JSON column has the Dynamic type. Dynamic allows ClickHouse to store values of any type without knowing the type in advance. You can verify this with the toTypeName function.
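
For example, using the illustrative customer_name field:

```sql
-- Every JSON path is reported as Dynamic.
SELECT toTypeName(_full_document.customer_name) AS field_type
FROM t1
LIMIT 1;
```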

To examine the underlying data type(s) for a field, use the dynamicType function. Note that the same field name can hold different data types in different rows.
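
For example, to list every concrete type observed for the illustrative total field:

```sql
SELECT DISTINCT dynamicType(_full_document.total) AS underlying_type
FROM t1;
```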

Regular functions work on Dynamic values just as they do on regular columns:

Example 1: Date parsing
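
A sketch, assuming the documents store order_date as a date string:

```sql
-- Parse a date stored as a string inside the JSON document.
SELECT parseDateTimeBestEffortOrNull(_full_document.order_date) AS order_date
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```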

Example 2: Conditional logic
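
A sketch, assuming a status field with string values such as 'delivered':

```sql
-- Branch on a JSON field directly.
SELECT
    _id,
    if(_full_document.status = 'delivered', 'complete', 'in progress') AS order_state
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```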

Example 3: Array operations
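
A sketch, assuming the documents contain an items array:

```sql
-- Array functions apply when the Dynamic value holds an array.
SELECT
    _id,
    length(_full_document.items) AS item_count
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```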

Field casting

Aggregation functions in ClickHouse don't work with the Dynamic type directly. For example, if you try to apply sum to a Dynamic field, the query fails because sum does not accept Dynamic arguments.
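
A sketch of such a failing query, again using the illustrative total field:

```sql
-- Fails: total is Dynamic, and sum does not accept Dynamic arguments.
SELECT sum(_full_document.total) AS total_sum
FROM t1;
```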

To use aggregation functions, cast the field to the appropriate type with the CAST function or :: syntax:
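
Both forms below assume a numeric total field:

```sql
-- Using the :: cast syntax
SELECT sum(_full_document.total::Float64) AS total_sum
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;

-- Equivalent using CAST
SELECT sum(CAST(_full_document.total, 'Float64')) AS total_sum
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```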

Note

Casting from dynamic type to the underlying data type (determined by dynamicType) is very performant, as ClickHouse already stores the value in its underlying type internally.

Flattening JSON

Normal view

You can create normal views on top of the JSON table to encapsulate flattening, casting, and transformation logic, so the data can be queried much like a relational table. Normal views are lightweight because they store only the query itself, not the underlying data.
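
For example, a sketch of such a view; the field names customer_name, total, and status are illustrative:

```sql
-- Flatten and type the JSON fields, deduplicate with FINAL,
-- and hide rows that were deleted in MongoDB.
CREATE VIEW t1_flattened AS
SELECT
    _id,
    _full_document.customer_name::String AS customer_name,
    _full_document.total::Float64        AS total,
    _full_document.status::String        AS status
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```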

The resulting view exposes plain, typed columns in place of the JSON document.

You can now query the view as you would a flattened table:
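
Continuing the sketch above:

```sql
SELECT customer_name, total
FROM t1_flattened
WHERE status = 'delivered';
```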

Refreshable materialized view

You can also create Refreshable Materialized Views, which enable you to schedule query execution for deduplicating rows and storing the results in a flattened destination table. With each scheduled refresh, the destination table is replaced with the latest query results.

The key advantage of this method is that the query using the FINAL keyword runs only once during the refresh, eliminating the need for subsequent queries on the destination table to use FINAL.

However, a drawback is that the data in the destination table is only as up-to-date as the most recent refresh. For many use cases, refresh intervals ranging from several minutes to a few hours provide a good balance between data freshness and query performance.
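
A sketch of this setup, reusing the illustrative fields from the view above and writing into a destination table named flattened_t1:

```sql
-- Destination table for the flattened, deduplicated rows (types are illustrative).
CREATE TABLE flattened_t1
(
    _id           String,
    customer_name String,
    total         Float64,
    status        String
)
ENGINE = MergeTree
ORDER BY _id;

-- Refreshable materialized view: re-runs the flattening query on a schedule
-- and replaces the contents of flattened_t1 with the result.
CREATE MATERIALIZED VIEW flattened_t1_mv
REFRESH EVERY 1 HOUR
TO flattened_t1
AS
SELECT
    _id,
    _full_document.customer_name::String AS customer_name,
    _full_document.total::Float64        AS total,
    _full_document.status::String        AS status
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```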

You can now query the table flattened_t1 directly without the FINAL modifier:
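
For instance, using the illustrative columns from the sketch above:

```sql
SELECT customer_name, sum(total) AS revenue
FROM flattened_t1
GROUP BY customer_name;
```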