Announcing Pixtral 12B - the first-ever multimodal Mistral model

Alan Timm

Sep 18, 2024, 6:51:43 PM
to RSSC-List
Impressive performance from a 12B vision model. It can be run on a GPU, and if one isn't out there already, I expect to see a 4-bit quantized version that runs in 8 GB of memory.

https://mistral.ai/news/pixtral-12b/
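
If you want to try the quantized route locally, here's a rough sketch of what that might look like with Hugging Face transformers and bitsandbytes. The repo id "mistral-community/pixtral-12b" and the chat-message format are my assumptions (they aren't in the announcement), so check the model card before copying this:

import torch
from PIL import Image
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

# Assumed community port of Pixtral 12B on the Hugging Face Hub -- verify it exists.
model_id = "mistral-community/pixtral-12b"

# NF4 4-bit quantization: the 12B weights drop from ~24 GB in fp16 to roughly 7 GB,
# which is what makes an 8 GB card plausible.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = LlavaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One image plus a question, formatted through the processor's chat template.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What does this chart show?"},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=Image.open("chart.png"), return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0], skip_special_tokens=True))

Activations add overhead on top of the weights, so 8 GB will be tight but workable for short generations.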

Pixtral 12B in short:
  • Natively multimodal, trained with interleaved image and text data
  • Strong performance on multimodal tasks, excels in instruction following
  • Maintains state-of-the-art performance on text-only benchmarks
  • Architecture:
    • New 400M parameter vision encoder trained from scratch
    • 12B parameter multimodal decoder based on Mistral Nemo
    • Supports variable image sizes and aspect ratios (a rough per-image token estimate follows this list)
    • Supports multiple images in the long context window of 128K tokens
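
On the variable-resolution point: the Pixtral paper describes the encoder splitting each image into 16x16 pixel patches, with a break token at the end of every patch row, so the token cost scales with resolution. A back-of-the-envelope estimate (my accounting, not anything official):

import math

def estimate_image_tokens(width: int, height: int, patch: int = 16) -> int:
    """Rough per-image token count: one token per 16x16 patch, plus an
    assumed row-break token per patch row and a single end-of-image token."""
    cols = math.ceil(width / patch)
    rows = math.ceil(height / patch)
    return rows * cols + rows + 1

print(estimate_image_tokens(512, 512))    # ~1057 tokens
print(estimate_image_tokens(1024, 1024))  # ~4161 tokens

At those rates a 128K-token context has room for dozens of full-resolution images alongside the text.
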
Pixtral is trained to understand both natural images and documents, achieving 52.5% on the MMMU reasoning benchmark, surpassing a number of larger models. The model shows strong abilities in tasks such as chart and figure understanding, document question answering, multimodal reasoning and instruction following. Pixtral is able to ingest images at their natural resolution and aspect ratio, giving the user flexibility on the number of tokens used to process an image. Pixtral is also able to process any number of images in its long context window of 128K tokens. Unlike previous open-source models, Pixtral does not compromise on text benchmark performance to excel in multimodal tasks.
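
For the multi-image case, this is roughly what a call against Mistral's hosted API looks like. The model id "pixtral-12b-2409" and the image_url payload shape are taken from my reading of the API docs at launch, so treat it as a sketch and verify against the current docs:

import os
import requests

# Assumed model id on La Plateforme; the endpoint is Mistral's standard
# chat-completions route.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "pixtral-12b-2409",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare the trends in these two charts."},
                {"type": "image_url", "image_url": "https://example.com/chart_a.png"},
                {"type": "image_url", "image_url": "https://example.com/chart_b.png"},
            ],
        }],
        "max_tokens": 512,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

The example.com URLs are placeholders; both images ride in the same user turn, which is the pattern the 128K context makes practical.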

