An online AI image search engine based on the CLIP model and the Qdrant vector database. Supports keyword search and similar-image search.
- Uses the CLIP model to generate a 768-dimensional vector for each image as the basis for search. No manual annotation or classification is required, and there is no limit on the number of categories.
- OCR text search is supported: PaddleOCR extracts text from images, and BERT generates text vectors for search.
- Uses the Qdrant vector database for efficient vector search.
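The retrieval flow can be sketched as follows. This is a minimal illustration only, not the project's actual code: the `images` collection name and the local Qdrant URL are assumptions, while the model name and 768-dimensional embeddings match the `openai/clip-vit-large-patch14` weights bundled in the Docker image.

```python
# Minimal sketch of the CLIP + Qdrant retrieval idea (not NekoImageGallery's actual code).
# Assumes a local Qdrant instance and a hypothetical "images" collection of 768-d vectors.
from PIL import Image
from qdrant_client import QdrantClient
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
client = QdrantClient(url="http://localhost:6333")

# Similar-image search: embed a query image into a 768-dimensional vector.
image_inputs = processor(images=Image.open("query.jpg"), return_tensors="pt")
image_vec = model.get_image_features(**image_inputs)[0].detach().tolist()

# Keyword search: embed the text query with CLIP's text encoder into the same space.
text_inputs = processor(text=["a cat sitting on a keyboard"], return_tensors="pt", padding=True)
text_vec = model.get_text_features(**text_inputs)[0].detach().tolist()

# Nearest-neighbour lookup in Qdrant returns the best-matching stored image vectors.
hits = client.search(collection_name="images", query_vector=image_vec, limit=10)
# hits = client.search(collection_name="images", query_vector=text_vec, limit=10)  # keyword search
for hit in hits:
    print(hit.id, hit.score)
```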
The above screenshots may contain copyrighted images from different artists; please do not use them for other purposes.
| Hardware | Minimum | Recommended |
|---|---|---|
| CPU | X86_64 or ARM64 CPU, 2 cores or more | 4 cores or more |
| RAM | 4GB or more | 8GB or more |
| Storage | 10GB or more for libraries, models, and data | 50GB or more, SSD is recommended |
| GPU | Not required | CUDA supported GPU for acceleration, 4GB of VRAM or more |
- For local deployment: Python 3.10 ~ Python 3.12, with uv package manager installed.
- For Docker deployment: Docker and Docker Compose (for CUDA users, `nvidia-container-runtime` is required) or an equivalent container runtime.
In most cases, we recommend using the Qdrant database to store metadata. The Qdrant database provides efficient retrieval performance, flexible scalability, and better data security.
Please deploy the Qdrant database according to the Qdrant documentation. It is recommended to use Docker for deployment.
If you don't want to deploy Qdrant yourself, you can use the online service provided by Qdrant.
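NekoImageGallery itself reads its Qdrant connection settings from its configuration files; the snippet below is only a quick way to verify that a Qdrant instance (self-hosted or the managed Qdrant Cloud service) is reachable, using the standard `qdrant-client` package. The URL and API key are placeholders.

```python
from qdrant_client import QdrantClient

# Self-hosted Qdrant, e.g. the Docker deployment described in the Qdrant docs.
client = QdrantClient(url="http://localhost:6333")

# Or Qdrant's managed cloud service, authenticated with an API key (placeholder values).
# client = QdrantClient(url="https://your-cluster.cloud.qdrant.io", api_key="YOUR_API_KEY")

print(client.get_collections())  # Quick connectivity check
```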
Local file storage directly stores image metadata (including feature vectors, etc.) in a local SQLite database. It is only recommended for small-scale deployments or development deployments.
Local file storage does not require an additional database deployment process, but has the following disadvantages:
- Local storage does not index or optimize vectors, so the time complexity of all searches is O(n). If the data scale is large, search and indexing performance will therefore degrade (see the sketch after this list).
- Using local file storage makes NekoImageGallery stateful, so it loses horizontal scalability.
- When you want to migrate to the Qdrant database for storage, the indexed metadata may be difficult to migrate directly.
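To illustrate the O(n) cost: without an index, every query must compare against every stored vector, as in this toy NumPy example (unrelated to the project's actual storage code; the library size is made up, the 768 dimensions match the CLIP embeddings).

```python
import numpy as np

# Toy brute-force search over n stored 768-d vectors: cost grows linearly with n.
rng = np.random.default_rng(0)
stored = rng.normal(size=(100_000, 768)).astype(np.float32)  # pretend image library
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

query = rng.normal(size=768).astype(np.float32)
query /= np.linalg.norm(query)

scores = stored @ query                  # one dot product per stored image -> O(n)
top10 = np.argsort(scores)[-10:][::-1]   # best matches by cosine similarity
print(top10, scores[top10])
```

A vector database like Qdrant avoids this full scan by maintaining an approximate nearest-neighbour index.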
Note
This tutorial is for NekoImageGallery v1.4.0 and later, in which we switched to uv as the package manager. If you are using an earlier version, please refer to the README file in the corresponding version tag.
- Clone the project directory to your own PC or server, then check out a specific version tag (like `v1.4.0`).
- Install the required dependencies:

  ```shell
  uv sync --no-dev --extra cpu    # For CPU-only deployment
  uv sync --no-dev --extra cu124  # For CUDA v12.4 deployment
  uv sync --no-dev --extra cu118  # For CUDA v11.8 deployment
  ```
Note
- It's required to specify the `--extra` option to install the correct dependencies. If you don't specify the `--extra` option, PyTorch and its related dependencies will not be installed.
- If you want to use CUDA for accelerated inference, be sure to select a CUDA-enabled extra variant in this step (we recommend `cu124` unless your platform does not support CUDA 12+). After installation, you can use `torch.cuda.is_available()` to confirm that CUDA is available (see the snippet after this note).
- If you are developing or testing, you can sync without the `--no-dev` switch to install the dependencies required for development, testing, and code checking.
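For example, a quick check could look like this (a minimal sketch; it only reports whether PyTorch can see a CUDA-capable GPU):

```python
import torch

# True only when the CUDA build of PyTorch is installed and a GPU is visible.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # Name of the first visible GPU
```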
- Modify the configuration file in the `config` directory as needed. You can directly modify `default.env`, but it is recommended to create a file named `local.env` to override the configuration in `default.env`.
- (Optional) Enable the built-in frontend:
  NekoImageGallery v1.5.0+ has a built-in frontend application based on NekoImageGallery.App.
  To enable it, set `APP_WITH_FRONTEND=True` in your configuration file.

  > [!WARNING]
  > After enabling the built-in frontend, all APIs will be automatically mounted under the `/api` sub-path. For example, the original `/docs` will become `/api/docs`. This may affect your existing deployment, so please proceed with caution.
- Run the application:

  ```shell
  uv run main.py
  ```

  You can specify the IP address to bind to with `--host` (default is `0.0.0.0`) and the port to bind to with `--port` (default is `8000`). You can view all available commands and options with `uv run main.py --help`.
- (Optional) Deploy the frontend application: If you do not want to use the built-in frontend, or want to deploy the frontend independently, you can refer to the deployment documentation of NekoImageGallery.App.
NekoImageGallery's Docker images are built and released on Docker Hub, including several variants:

Where `<version>` is the version number or version alias of NekoImageGallery, as follows:

| Version | Description |
|---|---|
| `latest` | The latest stable version of NekoImageGallery |
| `v*.*.*` / `v*.*` | A specific version number (corresponding to Git tags) |
| `edge` | The latest development version of NekoImageGallery; may contain unstable features and breaking changes |
In each image, we have bundled the necessary dependencies, `openai/clip-vit-large-patch14` model weights, `bert-base-chinese` model weights, and `easy-paddle-ocr` models to provide a complete and ready-to-use image.
The images use `/opt/NekoImageGallery/static` as a volume to store image files; mount it to your own volume or directory if local storage is required.
For configuration, we recommend using environment variables to override the default configuration. Secret information (such as API tokens) can be provided through docker secrets.
Note
To enable the built-in frontend, please set the environment variable `APP_WITH_FRONTEND=True`.
After enabling it, all APIs will be automatically mounted under the `/api` sub-path; please ensure that your reverse proxy and other configurations are correct.
If you want to support CUDA acceleration during inference, please refer to the Docker GPU related documentation for installation.
Related Document:
- Download the `docker-compose.yml` file from the repository.

  ```shell
  # For CUDA deployment (default)
  wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose.yml
  # For CPU-only deployment
  wget https://raw.githubusercontent.com/hv0905/NekoImageGallery/master/docker-compose-cpu.yml && mv docker-compose-cpu.yml docker-compose.yml
  ```
- Modify the `docker-compose.yml` file as needed.
- Run the following command to start the server:

  ```shell
  # Start in foreground
  docker compose up
  # Start in background (detached mode)
  docker compose up -d
  ```
There are several ways to upload images to NekoImageGallery:
- Via web interface: You can use the built-in web interface or the standalone NekoImageGallery.App to upload images to the server. Please make sure you have enabled the Admin API and set your Admin Token in the configuration file.
- Via local indexing: This is suitable for local deployment or when the images you want to upload are already on the
server.
Use the following command to index your local image directory:
```shell
python main.py local-index <path-to-your-image-directory>
```

The above command will recursively upload all images in the specified directory and its subdirectories to the server. You can also specify categories/starred for the images you upload; see `python main.py local-index --help` for more information.
- Via API: You can use the upload API provided by NekoImageGallery to upload images. With this method, the server can avoid saving the image files locally and store only their URLs and metadata.
Please make sure you have enabled the Admin API and set your Admin Token in the configuration file. This method is suitable for automated image uploading or synchronizing NekoImageGallery with external systems. For more information, please check the API documentation.
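As a purely illustrative sketch of automated uploading: the endpoint path, field name, and header below are hypothetical placeholders; the actual upload API schema is described in the server's generated API documentation (`/docs`, or `/api/docs` with the built-in frontend enabled).

```python
import requests

# HYPOTHETICAL example -- the "/admin/upload" path, the "image_file" field, and the
# "X-Admin-Token" header are placeholders; consult the Swagger UI for the real API.
SERVER = "http://localhost:8000"
ADMIN_TOKEN = "your-admin-token"

with open("cat.png", "rb") as image_file:
    response = requests.post(
        f"{SERVER}/admin/upload",
        headers={"X-Admin-Token": ADMIN_TOKEN},
        files={"image_file": image_file},
    )
response.raise_for_status()
print(response.json())
```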
The API documentation is provided by the built-in Swagger UI of FastAPI. You can view the API documentation by
accessing the /docs or /redoc path of the server.
Note
If you enable the built-in frontend, the path to the API documentation will become /api/docs and /api/redoc.
For a more detailed Wiki of the project, including how the project works, you can visit the Wiki generated by DeepWiki: NekoImageGallery DeepWiki.
(The wiki is generated automatically and is not fully reviewed by the project team, so read with caution.)
These projects work with NekoImageGallery :D
There are many ways to contribute to the project: logging bugs, submitting pull requests, reporting issues, and creating suggestions.
Even if you have push access to the repository, you should create personal feature branches when you need them. This keeps the main repository clean and your workflow cruft out of sight.
We're also interested in your feedback on the future of this project. You can submit a suggestion or feature request through the issue tracker. To make this process more effective, we're asking that these include more information to help define them more clearly.
Copyright 2025 EdgeNeko
Licensed under AGPLv3 license.