LLMs, VLM, and other acronyms


John F Sowa

Mar 25, 2026, 1:56:25 PM
to ontolog-forum, CG
Today's talk for the Ontology Summit mentioned VLMs and LLMs.  There are many more related acronyms. Following is a list.  Source:  https://www.hachi-x.com/en/single-post/differences-between-llm-vlm-lvm-lmm-mllm-generative-ai-and-foundation-models 

John
________________________________


LLM (Large Language Model)


  • Description: Large Language Models are trained on vast amounts of text data and perform natural language processing (NLP) tasks. An example is the GPT (Generative Pre-trained Transformer) series.

  • Uses: Text generation, summarization, question answering, translation, etc.


VLM (Vision-Language Model)


  • Description: Models that handle both visual and textual information, processing text related to images and videos. For example, they generate image captions or perform visual question answering (VQA).

  • Uses: Image captioning, image search, visual question answering, etc.


LVM (Latent Variable Model)


  • Description: Latent Variable Models assume latent variables behind observed data and use them to model the data. Typical examples include Gaussian Mixture Models (GMM) and Variational Autoencoders (VAE).

  • Uses: Data clustering, generative models, anomaly detection, etc.
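As an illustration (mine, not from the linked article), the latent structure of a one-dimensional two-component Gaussian mixture can be fit with a few lines of EM; the per-point component responsibilities are the latent variables:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM.

    The per-point responsibilities computed in the E-step are the
    latent variables assumed to lie behind the observed data.
    """
    # Crude initialization: anchor the components at the extremes.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, var, pi = fit_gmm_1d(x)
print(np.sort(mu))  # recovered component means, near -3 and 3
```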


LMM (Linear Mixed Model)


  • Description: Linear Mixed Models include both fixed effects and random effects, applied to hierarchical structures and correlated data.

  • Uses: Data analysis in biostatistics, economics, psychology, etc.
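A minimal simulation of that fixed/random-effects structure (my sketch, not from the article): data generated as y_ij = b0 + b1*x_ij + u_j + e_ij, where b0, b1 are fixed effects and u_j is a per-group random intercept. Within-group centering removes u_j, so ordinary least squares on the centered data recovers the fixed slope despite the grouping:

```python
import numpy as np

# Simulate y_ij = b0 + b1 * x_ij + u_j + e_ij for 30 groups:
# b0, b1 are fixed effects; u_j is a group-level random intercept.
rng = np.random.default_rng(1)
n_groups, n_per = 30, 40
u = rng.normal(0, 2.0, n_groups)            # random intercepts, sd = 2
x = rng.normal(0, 1, (n_groups, n_per))
e = rng.normal(0, 1.0, (n_groups, n_per))   # residual noise, sd = 1
y = 1.0 + 0.5 * x + u[:, None] + e          # true fixed slope b1 = 0.5

# Within-group centering removes u_j, so a plain OLS slope on the
# centered data estimates the fixed effect b1.
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
beta1_hat = (xc * yc).sum() / (xc ** 2).sum()
print(round(beta1_hat, 2))  # close to the true slope 0.5
```

Dedicated mixed-model estimators also recover the variance components (here sd 2 vs sd 1), which the centering trick discards.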


MLLM (Multilingual Language Model)


  • Description: Multilingual Language Models are trained in multiple languages and perform tasks such as translation and NLP across different languages.

  • Uses: Multilingual translation, multilingual question answering, multilingual text generation, etc.


Generative AI


  • Description: Generative AI refers to AI technologies that generate new data, including images, text, speech, and video. This includes techniques like GANs (Generative Adversarial Networks) and VAEs.

  • Uses: Image generation, text generation, speech synthesis, data augmentation, etc.


Foundation Model


  • Description: Foundation Models are large-scale, pre-trained models that can be adapted to a wide range of tasks. They serve as a base for various downstream tasks.

  • Uses: Diverse NLP tasks, visual recognition, generative tasks, etc.


These terms may overlap in usage, but each refers to specific technologies or applications, so understanding them in context is important.


Alastair Paton

Mar 25, 2026, 6:42:16 PM
to ontolo...@googlegroups.com, CG
Hi John,

Thank you for the pointer below to a list of acronyms, many of which end in M for Model.
 
Q. Have you (or other Ontolog Forum members) any opinions on:
  a. why the Structural Causal Model (SCM) doesn’t usually appear in such lists, and
  b. if it should?

The Structural Causal Model (SCM)
"At the center of the structural theory of causation lies a “structural model,” M, consisting of two sets of variables, U and V, and a set F of functions that determine or simulate how values are assigned to each variable Vi ∈ V. …"
Source: (2015) Elias Bareinboim and Judea Pearl, Causal inference from big data: Theoretical foundations and the data-fusion problem https://ftp.cs.ucla.edu/pub/stat_ser/r450.pdf#page=2
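That (U, V, F) structure is small enough to sketch directly in code. The variable names and functions below are illustrative (my example, not Bareinboim and Pearl's); the do() override is what distinguishes an intervention from an observation:

```python
# A minimal SCM sketch: exogenous variables U, endogenous variables V
# (the keys of F), and one structural function per V_i, evaluated in
# topological order.  All names and coefficients are illustrative.
U = {"u_x": 1.0, "u_y": 0.5}
F = {
    "x": lambda vals: vals["u_x"],                   # X := U_X
    "y": lambda vals: 2 * vals["x"] + vals["u_y"],   # Y := 2X + U_Y
}

def evaluate(F, U, do=None):
    """Assign a value to each V_i; do() replaces a variable's
    structural function with a fixed value (an intervention)."""
    vals = dict(U)
    for v, f in F.items():   # insertion order serves as topological order
        vals[v] = do[v] if do and v in do else f(vals)
    return vals

obs = evaluate(F, U)                       # observational run
intervened = evaluate(F, U, do={"x": 3.0})  # do(X = 3)
print(obs["y"], intervened["y"])  # 2.5 and 6.5
```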

My key interest is using ontologies to define said variables.

Regards, AL.
Alastair Paton

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info

John F Sowa

Mar 25, 2026, 7:05:22 PM
to ontolo...@googlegroups.com, CG
Alastair,

I just copied a list of acronyms that are currently popular in AI.  I've known Judea Pearl for years, but I haven't read every paper he wrote, and I admit that I did not remember his acronym SCM.  I just searched for SCM.  Google AI produced

Supply Chain Management (SCM) is the centralized management of the entire flow of goods, services, and information—from sourcing raw materials to final delivery to the customer. Its goal is to maximize efficiency, reduce costs, and build a competitive advantage by synchronizing supply with demand, leveraging technology for real-time visibility, and improving customer satisfaction.

Another answer:

The sternocleidomastoid (SCM) is a large, superficial paired muscle located on either side of the neck, running from the base of the skull behind the ear to the collarbone and sternum. It is a primary muscle for flexing the neck, rotating the head to the opposite side, and tilting the head.



David Poole

Mar 25, 2026, 7:44:47 PM
to ontolo...@googlegroups.com, CG
In our AI textbook (Cambridge University Press, 2023; full text at artint.info ) we have a whole chapter on causal models (https://artint.info/3e/html/ArtInt3e.Ch11.html), a chapter on deep learning (plus four other chapters on machine learning), and a chapter on knowledge graphs and ontologies.

I’m always surprised how little attention there is to causality. It is impossible to learn how the world works by observation alone. Consider observing a doctor sending some people to hospital and sending some home, and observing death rates. Unless you know the underlying condition that led the doctor to send a patient to hospital (the confounders), you can’t learn the effect of going to hospital. This is why drug companies spend billions on randomized control trials rather than just collecting observational data.
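The hospital example can be simulated in a few lines (a sketch with made-up numbers): severity confounds both the doctor's decision and the outcome, so the observational contrast gets the sign of the effect wrong, while random assignment recovers the protective effect:

```python
import numpy as np

# Severity S (the confounder) drives both the doctor's decision and
# the death risk.  Hospital care is protective (it lowers the death
# probability by 0.1), but hospital patients were sicker to begin with.
rng = np.random.default_rng(3)
n = 200_000
severity = rng.uniform(0, 1, n)
hospital = rng.uniform(0, 1, n) < severity               # sicker -> sent in
p_death = np.clip(0.5 * severity - 0.1 * hospital, 0, 1)
death = rng.uniform(0, 1, n) < p_death

obs_diff = death[hospital].mean() - death[~hospital].mean()
print(round(obs_diff, 2))  # positive: hospital "looks" harmful

# Randomized assignment (an RCT) breaks the severity -> hospital link:
rct = rng.uniform(0, 1, n) < 0.5
p_death_rct = np.clip(0.5 * severity - 0.1 * rct, 0, 1)
death_rct = rng.uniform(0, 1, n) < p_death_rct
rct_diff = death_rct[rct].mean() - death_rct[~rct].mean()
print(round(rct_diff, 2))  # negative: care actually reduces deaths
```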

Ignoring missing information gives misleading results. One reason why drug trials are so expensive is that they need to keep track of why patients dropped out. At our AAAI invited talk, I asked the audience who took missing data into account. No one put up their hand. (see https://underline.io/events/501/sessions/22053/lecture/146031-the-essence-of-intelligence-is-appropriate-action-not-thinking-reasoning-learning-or-language-and-other-things-every-student-of-ai-should-know)

These inconvenient truths are not a good way to make money when they don’t fit in with the hammer you are trying to sell.

David



On Mar 25, 2026, at 3:41 PM, Alastair Paton <alastai...@patonproject.peopleproject.org.au> wrote:
