LLM (Large Language Model)
Description: Large Language Models are trained on vast amounts of text data and perform natural language processing (NLP) tasks. An example is the GPT (Generative Pre-trained Transformer) series.
Uses: Text generation, summarization, question answering, translation, etc.
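At its core, a language model predicts the next token from the preceding context. The following is a deliberately minimal, hypothetical sketch of that idea using a bigram model in plain Python; real LLMs like GPT use transformer networks with billions of parameters, not frequency tables.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count next-word frequencies for each word (a toy 'language model')."""
    words = text.split()
    model = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        model[word][nxt] += 1
    return model

def generate(model, start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        counts = model.get(out[-1])
        if not counts:
            break  # no observed continuation for this word
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(generate(model, "the", length=3))  # → "the cat sat on"
```

The same next-token objective, scaled up to web-sized corpora and learned (rather than counted) representations, is what yields the text generation, summarization, and question-answering abilities listed above.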
VLM (Vision-Language Model)
Description: Models that handle both visual and textual information, processing text related to images and videos. For example, they generate image captions or perform visual question answering (VQA).
Uses: Image captioning, image search, visual question answering, etc.
LVM (Latent Variable Model)
Description: Latent Variable Models assume latent variables behind observed data and use them to model the data. Typical examples include Gaussian Mixture Models (GMM) and Variational Autoencoders (VAE).
Uses: Data clustering, generative models, anomaly detection, etc.
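To make the "latent variable" idea concrete, here is a small self-contained sketch of expectation-maximization (EM) for a two-component 1-D Gaussian Mixture Model. The component label of each data point is the latent variable: it is never observed, only inferred as a "responsibility" in the E-step. (The data values and initialization scheme are illustrative assumptions, not from any particular source.)

```python
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(data, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM."""
    # Crude initialization: split the sorted data at the median.
    data = sorted(data)
    half = len(data) // 2
    means = [sum(data[:half]) / half, sum(data[half:]) / (len(data) - half)]
    vars_ = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior probability (responsibility) of each component
        # for each point -- the inferred distribution over the latent label.
        resp = []
        for x in data:
            p = [w * gaussian_pdf(x, m, v)
                 for w, m, v in zip(weights, means, vars_)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(data)
            means[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            vars_[k] = sum(r[k] * (x - means[k]) ** 2
                           for r, x in zip(resp, data)) / nk + 1e-6
    return means, vars_, weights

# Two obvious clusters near 0 and near 5 (made-up data).
data = [0.1, 0.3, -0.2, 0.0, 5.1, 4.8, 5.3, 5.0]
means, vars_, weights = em_gmm_1d(data)
```

Once fitted, the responsibilities give a soft clustering of the data, and low likelihood under the mixture can flag anomalies, which is how the uses listed above follow from the same model.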
LMM (Linear Mixed Model)
Description: Linear Mixed Models include both fixed effects and random effects, and are used for hierarchical (grouped) and correlated data.
Uses: Data analysis in biostatistics, economics, psychology, etc.
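In standard matrix notation, the linear mixed model combines the two kinds of effects as

```latex
y = X\beta + Zu + \varepsilon,
\qquad u \sim \mathcal{N}(0, G),
\qquad \varepsilon \sim \mathcal{N}(0, R),
```

where $y$ is the vector of observations, $X\beta$ are the fixed effects, $Zu$ are the random effects (e.g., per-group intercepts), and $\varepsilon$ is the residual error. The random effects are what let the model absorb correlation within groups such as patients, firms, or subjects.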
MLLM (Multilingual Language Model)
Description: Multilingual Language Models are trained on text in many languages and perform tasks such as translation and other NLP tasks across languages.
Uses: Multilingual translation, multilingual question answering, multilingual text generation, etc.
Generative AI
Description: Generative AI refers to AI technologies that generate new data, including images, text, speech, and video. This includes techniques like GANs (Generative Adversarial Networks) and VAEs.
Uses: Image generation, text generation, speech synthesis, data augmentation, etc.
Foundation Model
Description: Foundation Models are large-scale, pre-trained models that can be adapted to a wide range of tasks. They serve as a base for various downstream tasks.
Uses: Diverse NLP tasks, visual recognition, generative tasks, etc.
These terms may overlap in usage, but each refers to specific technologies or applications, so understanding them in context is important.
SCM (Structural Causal Model)
Description: "At the center of the structural theory of causation lies a 'structural model,' M, consisting of two sets of variables, U and V, and a set F of functions that determine or simulate how values are assigned to each variable Vi ∈ V. …"
Source: Elias Bareinboim and Judea Pearl (2015), "Causal inference from big data: Theoretical foundations and the data-fusion problem." https://ftp.cs.ucla.edu/pub/stat_ser/r450.pdf#page=2
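The quoted definition can be illustrated with a tiny, hypothetical SCM (this example is mine, not from the Bareinboim and Pearl paper): exogenous variables U supply the noise, and the functions F assign each endogenous variable in V from its parents. An intervention do(X = x) simply replaces the structural equation for X.

```python
import random

def sample_scm(n=10_000, intervene_x=None, seed=0):
    """Toy SCM with V = {X, Y}, U = {u_x, u_y}, and
    F = {X := u_x,  Y := 2*X + u_y}. Returns the Monte Carlo mean of Y."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        u_x = rng.gauss(0, 1)  # exogenous noise for X
        u_y = rng.gauss(0, 1)  # exogenous noise for Y
        # f_X: X := u_x, unless do(X = x) overrides the equation for X.
        x = u_x if intervene_x is None else intervene_x
        # f_Y: Y := 2*X + u_y
        y = 2 * x + u_y
        ys.append(y)
    return sum(ys) / n

# Observationally E[Y] ≈ 0; under the intervention do(X = 1), E[Y] ≈ 2.
print(sample_scm(), sample_scm(intervene_x=1))
```

The contrast between the two calls is the point of the formalism: the interventional distribution is obtained by surgically editing one function in F while leaving U and the rest of F untouched.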
On Mar 25, 2026, at 3:41 PM, Alastair Paton <alastai...@patonproject.peopleproject.org.au> wrote: