Dear all,
This is a reminder about the talk by Marco Bronzini from the University of Trento, on VSAONLINE in less than one hour!
See you soon
Evgeny
"Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures"
Date: April 13, 2026
Time: 20:00 GMT
Zoom: https://ltu-se.zoom.us/j/65564790287
Abstract: Despite their capabilities, Large Language Models (LLMs) remain opaque, and our understanding of their internal representations is limited. Current interpretability methods focus either on input-oriented feature extraction, such as supervised probes and Sparse Autoencoders (SAEs), or on output-distribution inspection, such as logit-oriented approaches. A full understanding of LLM vector spaces, however, requires integrating both perspectives, which existing approaches struggle to do due to constraints on how latent features are defined. We introduce the Hyperdimensional Probe, a hybrid supervised probe that combines symbolic representations with neural probing. Leveraging Vector Symbolic Architectures (VSAs) and hypervector algebra, it unifies prior methods: the top-down interpretability of supervised probes, the sparsity-driven proxy space of SAEs, and output-oriented logit investigation. This allows deeper input-focused feature extraction while also supporting output-oriented investigation. Our experiments show that the method consistently extracts meaningful concepts across LLMs, embedding sizes, and setups, uncovering concept-driven patterns in analogy-oriented inference and QA-focused text generation. By supporting joint input–output analysis, this work advances the semantic understanding of neural representations while unifying the complementary perspectives of prior methods.
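For those unfamiliar with the hypervector algebra the abstract refers to, here is a minimal, self-contained sketch of the core VSA operations (bipolar binding, bundling, and cleanup by cosine similarity). This is only an illustration of the general technique, not the probe presented in the talk; the concept names, roles, and dimensionality below are hypothetical.

```python
import random

DIM = 4096          # hypervector dimensionality; high enough for quasi-orthogonality
random.seed(0)

def rand_hv():
    """Random bipolar hypervector; random high-dim vectors are nearly orthogonal."""
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Elementwise multiply: binds a role to a filler (self-inverse operation)."""
    return [x * y for x, y in zip(a, b)]

def bundle(a, b):
    """Elementwise majority sign: superimposes two hypervectors into one."""
    return [1 if x + y >= 0 else -1 for x, y in zip(a, b)]

def sim(a, b):
    """Normalized dot product (cosine similarity for bipolar vectors)."""
    return sum(x * y for x, y in zip(a, b)) / DIM

# hypothetical concept codebook and role vectors (illustrative only)
codebook = {name: rand_hv() for name in ("paris", "france", "berlin")}
role_city, role_country = rand_hv(), rand_hv()

# encode two role-filler pairs in a single hypervector
record = bundle(bind(role_city, codebook["paris"]),
                bind(role_country, codebook["france"]))

# query: unbind the city role, then "clean up" against the codebook
noisy = bind(record, role_city)
best = max(codebook, key=lambda name: sim(noisy, codebook[name]))
```

Because binding with a bipolar vector is its own inverse, unbinding the record with `role_city` yields a noisy copy of the "paris" hypervector, and the codebook lookup recovers the symbolic concept; this invertible, compositional structure is what makes hypervector spaces attractive as an interpretable proxy for neural representations.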