
Fd

  • Play the web demo in your browser
  • Try the macOS app demo

Find Images with Keywords or Descriptions

Deck

(Screenshots of the demo deck)


Powered by OpenAI CLIP

CLIP is one of the most data-efficient models in its class: trained on 400 million images, it reaches 41% accuracy, outperforming alternatives trained at the same scale such as Bag-of-Words prediction (27%) and a Transformer language model (16%). In practice, this means CLIP trains much faster than other models in the same domain.

CLIP is also remarkably versatile. Because it was trained on such a wide array of image styles, it is far more flexible than models trained only on narrower datasets like ImageNet. It is important to note, however, that CLIP generalizes well to images similar to those it was trained on, not to images outside its training domain.
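The search idea described above comes down to one operation: embed the text query and every candidate image into CLIP's shared vector space, then rank images by cosine similarity to the query. The sketch below shows only that ranking step, with small NumPy vectors standing in for real CLIP embeddings (the array values, dimensions, and file names are hypothetical; real CLIP embeddings have 512+ dimensions and come from the model's image and text encoders).

```python
import numpy as np

def rank_images(text_embedding, image_embeddings, filenames):
    """Rank images by cosine similarity to a text query embedding.

    In a real pipeline these vectors would come from CLIP's text and
    image encoders; here they are toy stand-ins.
    """
    # Normalize so that a plain dot product equals cosine similarity.
    t = text_embedding / np.linalg.norm(text_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = imgs @ t
    order = np.argsort(-scores)  # highest similarity first
    return [(filenames[i], float(scores[i])) for i in order]

# Hypothetical 4-dimensional embeddings for illustration only.
query = np.array([1.0, 0.0, 0.5, 0.0])
images = np.array([
    [0.9, 0.1, 0.4, 0.0],   # points in nearly the same direction as the query
    [0.0, 1.0, 0.0, 0.8],   # orthogonal to the query
])
print(rank_images(query, images, ["cat.jpg", "car.jpg"]))
```

The best-matching image comes back first with its similarity score, which is how a keyword or free-text description can retrieve images without any per-image labels.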

Output:

(Screenshots of the matched images)