We’re introducing CLIP, a neural network that efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark...
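The zero-shot mechanics behind that claim can be sketched in a few lines: embed the image and each candidate label text, compare them by cosine similarity, and softmax the scaled similarities into class probabilities. The sketch below uses random vectors as stand-ins for the real image and text encoders, so the embedding dimension, the candidate captions, and the temperature value are all illustrative assumptions, not CLIP's actual outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # Project embeddings onto the unit sphere so a dot product
    # equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical 512-d embeddings: one image and three candidate
# captions (e.g. "a photo of a dog", "a photo of a cat", ...).
# Real CLIP would produce these with its image and text encoders.
image_emb = normalize(rng.standard_normal((1, 512)))
text_embs = normalize(rng.standard_normal((3, 512)))

# Cosine similarities, scaled by a temperature (an assumed value
# here); softmax turns them into per-caption probabilities.
logits = 100.0 * image_emb @ text_embs.T
shifted = np.exp(logits - logits.max())
probs = shifted / shifted.sum()

# The highest-probability caption is the zero-shot prediction.
prediction = int(probs.argmax())
```

Because the "classifier" is just a set of caption embeddings, swapping benchmarks only requires swapping the label texts, which is why no per-dataset training is needed.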
Next to each structure, the colored bar indicates the location of the different domains in the corresponding sequence. All images were generated in UCSF Chime...