Laion-400M dataset

The Laion-400M dataset contains 400 million images with English image captions. Laion nowadays provides even larger datasets, but working with them is similar.

The dataset contains the image URL, embeddings for both the image and the image caption, a similarity score between the image and the image caption, as well as metadata, e.g. the image width/height, the license, and an NSFW flag. We can use the dataset to demonstrate approximate nearest neighbor search in ClickHouse.

Data preparation

The embeddings and the metadata are stored in separate files in the raw data. A data preparation step downloads the data, merges the files, converts them to CSV and imports them into ClickHouse. You can use the following download.sh script for that:
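The original script is not reproduced here, but a minimal sketch could look as follows. The base URL, the file-name pattern, and the three-files-per-shard layout are assumptions and must be adapted to the actual Laion-400M release:

```shell
#!/bin/bash
# download.sh (hypothetical sketch): fetch the three files that make up one
# shard of the dataset. BASE_URL below is a placeholder, not the real host.
set -euo pipefail

BASE_URL="https://example.com/laion400m"

download_shard() {
    local number
    number=$(printf "%04d" "$1")   # zero-pad the shard number, e.g. 7 -> 0007
    wget --tries=100 "${BASE_URL}/img_emb/img_emb_${number}.npy"
    wget --tries=100 "${BASE_URL}/text_emb/text_emb_${number}.npy"
    wget --tries=100 "${BASE_URL}/metadata/metadata_${number}.parquet"
}

if [ "$#" -ge 1 ]; then
    download_shard "$1"
fi
```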

Script process.py is defined as follows:
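The original script is likewise omitted here; the sketch below shows the general shape of the merge step. The shard file names, metadata column names, and the CSV layout are assumptions that must be matched to the downloaded files and to the table created later:

```python
#!/usr/bin/env python3
# process.py (hypothetical sketch): merge the .npy embedding files of one
# shard with its .parquet metadata and emit a CSV for ClickHouse import.
import sys

import numpy as np
import pandas as pd


def array_literal(vec):
    # Format a vector as a ClickHouse Array literal, e.g. [0.100000,0.200000]
    return "[" + ",".join(f"{x:.6f}" for x in vec) + "]"


def merge_shard(metadata: pd.DataFrame, img_emb: np.ndarray,
                txt_emb: np.ndarray) -> pd.DataFrame:
    # The three inputs are row-aligned: row i of the metadata belongs to
    # row i of both embedding matrices.
    assert len(metadata) == len(img_emb) == len(txt_emb)
    out = metadata.copy()
    out["image_embedding"] = [array_literal(v) for v in img_emb]
    out["text_embedding"] = [array_literal(v) for v in txt_emb]
    return out


def main(shard: str) -> None:
    # File-name patterns are assumptions; adjust to the actual release.
    metadata = pd.read_parquet(f"metadata_{shard}.parquet")
    img_emb = np.load(f"img_emb_{shard}.npy")
    txt_emb = np.load(f"text_emb_{shard}.npy")
    merge_shard(metadata, img_emb, txt_emb).to_csv(f"laion_{shard}.csv",
                                                   index=False)


if __name__ == "__main__":
    main(sys.argv[1])
```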

To start the data preparation pipeline, run:
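A driver along the following lines fits the description in the next paragraphs (the `seq 0 9 | ...` hint and the `-P1` parallelism knob); the exact invocation of the two scripts is an assumption:

```shell
seq 0 409 | xargs -P1 -I{} bash -c './download.sh {} && ./process.py {}'
```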

The dataset is split into 410 files, each containing approximately 1 million rows. If you would like to work with a smaller subset of the data, simply adjust the limits, e.g. seq 0 9 | ....

(The Python script above is very slow (~2-10 minutes per file), takes a lot of memory (41 GB per file), and the resulting CSV files are big (10 GB each), so be careful. If you have enough RAM, increase the -P1 value for more parallelism. If this is still too slow, consider coming up with a better ingestion procedure, for example converting the .npy files to Parquet and then doing all the other processing with ClickHouse.)

Create table

To create a table initially without indexes, run:
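The statement below is a sketch: the table name, column names, and types are assumptions derived from the dataset description above and must match the CSV produced in the preparation step:

```sql
CREATE TABLE laion
(
    id UInt32,
    url String,
    caption String,
    similarity Float32,
    width UInt32,
    height UInt32,
    image_embedding Array(Float32),
    text_embedding Array(Float32)
)
ENGINE = MergeTree
ORDER BY id;
```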

To import the CSV files into ClickHouse:
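Assuming a table named `laion` and CSV files named as in the sketches above, an import loop could look like this:

```shell
for file in laion_*.csv; do
    clickhouse-client --query "INSERT INTO laion FORMAT CSVWithNames" < "$file"
done
```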

Note that the id column is just for illustration and is populated by the script with non-unique values.

To run a brute-force (exact) vector search, run:
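A query of the following shape, with table and column names as assumed in the sketches above, performs the brute-force search:

```sql
SELECT url, caption
FROM laion
ORDER BY cosineDistance(image_embedding, {target:Array(Float32)}) ASC
LIMIT 10
```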

target is an array of 512 elements and a client parameter. A convenient way to obtain such arrays will be presented at the end of the article. For now, we can use the embedding of a random LEGO set picture as target.

Result

Run an approximate vector similarity search with a vector similarity index

Let's now define two vector similarity indexes on the table.
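The statements below are a sketch built from the parameters described in the next paragraph (cosine distance, bf16 quantization, hnsw_max_connections_per_layer = 64, hnsw_candidate_list_size_for_construction = 256, 512-dimensional embeddings); the index names are illustrative and the exact argument order of vector_similarity may differ between ClickHouse versions, so consult the documentation:

```sql
ALTER TABLE laion ADD INDEX image_idx image_embedding
    TYPE vector_similarity('hnsw', 'cosineDistance', 512, 'bf16', 64, 256);

ALTER TABLE laion ADD INDEX text_idx text_embedding
    TYPE vector_similarity('hnsw', 'cosineDistance', 512, 'bf16', 64, 256);
```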

The parameters and performance considerations for index creation and search are described in the documentation. The above index definition specifies an HNSW index using cosine distance as the distance metric, with the parameter hnsw_max_connections_per_layer set to 64 and the parameter hnsw_candidate_list_size_for_construction set to 256. The index uses half-precision brain floats (bfloat16) as quantization to optimize memory usage.

To build and materialize the index, run these statements:
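Assuming the two indexes were created with the names image_idx and text_idx, materializing them looks like this:

```sql
ALTER TABLE laion MATERIALIZE INDEX image_idx;
ALTER TABLE laion MATERIALIZE INDEX text_idx;
```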

Building and saving the index can take a few minutes or even hours, depending on the number of rows and the HNSW index parameters.

To perform a vector search, just execute the same query again:

Result

The query latency decreased significantly because the nearest neighbors were retrieved using the vector index. Vector similarity search using a vector similarity index may return results that differ slightly from the brute-force search results. An HNSW index can potentially achieve a recall close to 1 (the same accuracy as brute-force search) with careful selection of the HNSW parameters and evaluation of the index quality.

Creating embeddings with UDFs

One usually wants to create embeddings for new images or new image captions and search for similar image / image caption pairs in the data. We can use a user-defined function (UDF) to create the target vector without leaving the client. It is important to use the same model to create the data and new embeddings for searches. The following scripts utilize the ViT-B/32 model which also underlies the dataset.

Text embeddings

First, store the following Python script in the user_scripts/ directory of your ClickHouse data path and make it executable (chmod +x encode_text.py).

encode_text.py:
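The original script is not reproduced here; the following is a minimal sketch of such an executable UDF, assuming the `clip` package (github.com/openai/CLIP) and torch are installed. ClickHouse sends one argument value per line on stdin, and the script must write one result per line to stdout and flush:

```python
#!/usr/bin/env python3
# encode_text.py (hypothetical sketch): read one caption per line from stdin
# and write the ViT-B/32 text embedding as an Array(Float32) literal per line.
import sys


def format_array(vec):
    # ClickHouse Array(Float32) literal, e.g. [0.1,0.2]
    return "[" + ",".join(str(x) for x in vec) + "]"


def main():
    # Third-party imports are deferred so the model is only loaded when the
    # script is actually invoked by the server.
    import clip
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _ = clip.load("ViT-B/32", device=device)
    for line in sys.stdin:
        with torch.no_grad():
            tokens = clip.tokenize([line.strip()]).to(device)
            emb = model.encode_text(tokens)[0].tolist()
        print(format_array(emb))
        sys.stdout.flush()    # flush per line so the server sees the result


if __name__ == "__main__":
    main()
```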

Then create encode_text_function.xml in a location referenced by <user_defined_executable_functions_config>/path/to/*_function.xml</user_defined_executable_functions_config> in your ClickHouse server configuration file.
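A sketch of such a configuration file follows; the exact set of supported tags depends on the ClickHouse version, and the read timeout is raised here on the assumption that model loading can exceed the default:

```xml
<functions>
    <function>
        <type>executable</type>
        <name>encode_text</name>
        <return_type>Array(Float32)</return_type>
        <argument>
            <type>String</type>
        </argument>
        <format>TabSeparated</format>
        <command>encode_text.py</command>
        <command_read_timeout>1000000</command_read_timeout>
    </function>
</functions>
```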

You can now simply use:
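For example (the search phrase is illustrative):

```sql
SELECT encode_text('cat and dog');
```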

The first run will be slow because it loads the model, but repeated runs will be fast. We can then copy the output to SET param_target=... and can easily write queries. Alternatively, the encode_text() function can be used directly as an argument to the cosineDistance function:
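With the table and column names assumed earlier, such a query could look like:

```sql
SELECT url, caption
FROM laion
ORDER BY cosineDistance(text_embedding, encode_text('cat and dog'))
LIMIT 10
```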

Note that the encode_text() UDF itself may take a few seconds to compute and emit the embedding vector.

Image embeddings

Image embeddings can be created similarly, and we provide a Python script that generates an embedding of an image stored locally as a file.

encode_image.py
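As with the text script, the original is not reproduced here; the sketch below assumes the `clip` package, torch, and Pillow are installed, and mirrors the stdin/stdout protocol of encode_text.py but takes a local file path per line:

```python
#!/usr/bin/env python3
# encode_image.py (hypothetical sketch): read one local image path per line
# from stdin and write the ViT-B/32 image embedding per line.
import sys


def format_array(vec):
    # ClickHouse Array(Float32) literal, e.g. [0.1,0.2]
    return "[" + ",".join(str(x) for x in vec) + "]"


def main():
    # Third-party imports are deferred so the model is only loaded when the
    # script is actually invoked by the server.
    import clip
    import torch
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    for line in sys.stdin:
        image = preprocess(Image.open(line.strip())).unsqueeze(0).to(device)
        with torch.no_grad():
            emb = model.encode_image(image)[0].tolist()
        print(format_array(emb))
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```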

encode_image_function.xml
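The registration file mirrors the text-embedding one; as before, the tag set is version-dependent and the timeout value is an assumption:

```xml
<functions>
    <function>
        <type>executable</type>
        <name>encode_image</name>
        <return_type>Array(Float32)</return_type>
        <argument>
            <type>String</type>
        </argument>
        <format>TabSeparated</format>
        <command>encode_image.py</command>
        <command_read_timeout>1000000</command_read_timeout>
    </function>
</functions>
```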

Fetch an example image to search:
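Any image will do; the URL below is a placeholder to substitute with a real one:

```shell
wget -O img.jpg https://example.com/some_image.jpg
```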

Then run this query to generate the embedding for the above image:
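Assuming the UDF was registered as encode_image and the image was saved under the path used below:

```sql
SELECT encode_image('/path/to/img.jpg');
```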

The complete search query is:
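With the table and column names assumed throughout this article, the combined query looks like:

```sql
SELECT url, caption
FROM laion
ORDER BY cosineDistance(image_embedding, encode_image('/path/to/img.jpg'))
LIMIT 10
```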