Queryable


Queryable is an open-source iOS app that uses OpenAI's CLIP model to search the 'Photos' album entirely offline. Unlike the category-based search built into the iOS Photos app, Queryable lets you search your album with natural-language queries such as "a brown dog sitting on a bench". Because everything runs offline, your album's privacy is never exposed to any company, including Apple or Google.

Blog | Website | App Store

How does it work?

  • Encode all album photos using the CLIP Image Encoder, compute image vectors, and save them.
  • For each new text query, compute the corresponding text vector using the Text Encoder.
  • Compute the similarity between this text vector and each image vector.
  • Rank and return the top K most similar results.

For more details on the process, please refer to my blog post: Run CLIP on iPhone to Search Photos.
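
To make the pipeline concrete, here is a minimal sketch of the same search flow using the reference PyTorch CLIP package (the app itself implements this in Swift with Core ML; the photo paths and query below are placeholders):

```python
# Conceptual sketch of Queryable's search pipeline, shown with the
# reference PyTorch CLIP package rather than the app's Swift/Core ML code.
import torch
import clip
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# 1. Encode all album photos once with the Image Encoder and cache the vectors.
photo_paths = ["photo_0001.jpg", "photo_0002.jpg"]  # placeholder paths
with torch.no_grad():
    images = torch.stack([preprocess(Image.open(p)) for p in photo_paths])
    image_features = model.encode_image(images.to(device))
    image_features /= image_features.norm(dim=-1, keepdim=True)

# 2. Encode the text query with the Text Encoder.
query = "a brown dog sitting on a bench"
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([query]).to(device))
    text_features /= text_features.norm(dim=-1, keepdim=True)

# 3. Similarity between the text vector and every image vector
#    (a plain dot product, since both sides are L2-normalized).
similarity = (image_features @ text_features.T).squeeze(1)

# 4. Rank and return the top-K most similar photos.
top_k = similarity.topk(k=min(3, len(photo_paths)))
for score, idx in zip(top_k.values, top_k.indices):
    print(f"{photo_paths[idx]}: {score.item():.3f}")
```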

Run on Xcode

Download ImageEncoder_float32.mlmodelc and TextEncoder_float32.mlmodelc from Google Drive. Clone this repo, place the downloaded models under the CoreMLModels/ path, and build with Xcode; it should work.

Core ML Export

If you only want to run Queryable, you can skip this step and directly use the exported models from Google Drive. If you wish to implement a version of Queryable that supports your own native language, or to do model quantization/acceleration work, here are some guidelines.

The trick is to separate the TextEncoder and ImageEncoder at the architecture level and then load the model weights individually. Queryable uses OpenAI's ViT-B/32 model, and I wrote a Jupyter notebook demonstrating how to separate, load, and export the Core ML models. The exported Core ML ImageEncoder has some precision error, and more appropriate normalization parameters may be needed.
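
As a rough illustration of that separation trick (not a substitute for the notebook), one can wrap encode_image and encode_text in two small modules, trace them separately, and convert each with coremltools. Exact op support, dtypes, and normalization details may vary with the CLIP and coremltools versions:

```python
# Sketch: split CLIP ViT-B/32 into two separately exportable encoders.
# Assumes the openai/CLIP package and a recent coremltools; the repo's
# Jupyter notebook is the authoritative version of this process.
import numpy as np
import torch
import clip
import coremltools as ct

model, _ = clip.load("ViT-B/32", device="cpu")  # loading on CPU keeps float32 weights
model.eval()

class ImageEncoder(torch.nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, image):
        return self.clip_model.encode_image(image)

class TextEncoder(torch.nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, tokens):
        return self.clip_model.encode_text(tokens)

# Trace each half separately with example inputs.
example_image = torch.rand(1, 3, 224, 224)
example_tokens = clip.tokenize(["a brown dog sitting on a bench"])
traced_image = torch.jit.trace(ImageEncoder(model).eval(), example_image)
traced_text = torch.jit.trace(TextEncoder(model).eval(), example_tokens)

# Convert each traced module to Core ML; Xcode compiles the saved
# packages into the .mlmodelc bundles the app loads at runtime.
ct.convert(
    traced_image,
    inputs=[ct.TensorType(name="image", shape=tuple(example_image.shape))],
    convert_to="mlprogram",
).save("ImageEncoder_float32.mlpackage")

ct.convert(
    traced_text,
    inputs=[ct.TensorType(name="text", shape=tuple(example_tokens.shape), dtype=np.int32)],
    convert_to="mlprogram",
).save("TextEncoder_float32.mlpackage")
```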

Contributions

Disclaimer: I am not a professional iOS engineer, so please forgive my rough Swift code. You may want to focus only on the parts that handle model loading, computation, storage, and ranking.

You can apply Queryable to your own product, but I don't recommend simply modifying the appearance and listing it on the App Store. If you are interested in optimizing certain aspects (such as mazzzystar#3, mazzzystar#4, mazzzystar#5, mazzzystar#6), feel free to submit a PR (pull request).

If you have any questions/suggestions, here are some contact methods: Discord | Twitter | Reddit: r/Queryable.

License

MIT License

Copyright (c) 2023 Ke Fang
