[VSAONLINE] The last talk of the winter series today at 20:00 GMT


Evgeny Osipov

unread,
Mar 29, 2021, 2:50:55 PM
to vsacom...@googlegroups.com
Dear all,


The last talk of the winter series will be today at 20:00 GMT. Please note possible daylight-saving time shifts and calculate the proper local offset.

The talk will be given by Hedda R. Schmidtke

Boundary and Normal States of Consciousness

The link to the webinar is https://ltu-se.zoom.us/j/65564790287

Regards,
Evgeny



--
Professor Evgeny Osipov

Dependable Communication and Computation Systems

Department of Computer Science
Electrical and Space Engineering
Luleå University of Technology

Hedda Schmidtke

unread,
Apr 3, 2021, 1:22:40 PM
to 'Evgeny Osipov' via VSACommunity
Dear Evgeny,

Thanks for uploading. Let me loop in the mailing list.

Regarding the question of the all-ones vector to start a cycle: this is not necessary in practice. Subtracting from 1 makes for a mathematically more concise description (and also has philosophical advantages), but in the implementation one can simply start from the negation (NOT) of the first clause vector, i.e., from !v0. That is the same as starting from 1 - v0.
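For binary vectors the two formulations coincide bit for bit, so the spiking-unfriendly all-ones vector never needs to be materialised. A minimal sketch (random binary vectors and the name v0 are illustrative assumptions, not details from the system):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8192  # dimensionality; roughly the ~8000 bits mentioned below

# v0: a hypothetical stand-in for the first clause vector.
v0 = rng.integers(0, 2, size=D, dtype=np.uint8)

# Concise form: subtract from the all-ones vector.
ones = np.ones(D, dtype=np.uint8)
start_concise = ones - v0

# Implementation form: bitwise NOT of the binary vector,
# computed without ever building an all-ones vector.
start_impl = v0 ^ 1

# The two starting vectors are identical.
assert np.array_equal(start_concise, start_impl)
```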

On the more general note of controlling density: I think the brain has several subsystems. We should make sure that they can interface, and the interface of my system is independent of its internal ebb and flow of density. Its input is a text (sum/permutation-encoded lists of words); its output is a sequence of images (e.g., on a grid-cell structure) and/or some actuator activations. Neither output type is problematic. More generally, VSAs face resource limitations in one way or another (as does any other physical computing system). In the case of your system, does it take a text/logical statement and an image? What does it output? A classical query response?
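A sum/permutation encoding of a word list can be sketched along standard VSA lines: permute each word's random vector by its position, then take a majority threshold over the sum. This is a generic illustration of the technique, not the system's actual encoder; the lexicon and words are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8192

# Hypothetical lexicon: one random binary vector per word.
lexicon = {w: rng.integers(0, 2, size=D, dtype=np.uint8)
           for w in ["the", "cat", "sat"]}

def encode_text(words):
    """Sum/permutation encoding: cyclically shift each word vector by its
    position (a simple permutation), then take the bitwise majority of the
    sum. Assumes an odd number of words so no tie-breaking is needed."""
    stack = np.stack([np.roll(lexicon[w], i) for i, w in enumerate(words)])
    total = stack.sum(axis=0)
    return (total > len(words) / 2).astype(np.uint8)

enc = encode_text(["the", "cat", "sat"])

# Position matters: the permuted word vector matches the encoding well,
# the unpermuted one only at chance level.
sim_pos = (np.roll(lexicon["cat"], 1) == enc).mean()
sim_raw = (lexicon["cat"] == enc).mean()
```

With three bundled vectors the positional match rate is about 0.75 (majority agrees with a member bit unless both others disagree), while the unshifted vector sits near 0.5.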

In practical experiments with realistic text (and ~8000 bits), I have not run into any major issues yet. When I have ~100 vectors to handle within one and the same cycle (i.e., in an artificially constructed text), I do, of course, get some artefacts. But between segments of natural text, the system is always reset by the text itself; that is, normal natural language comes with built-in density control. One can see that in the demo as well. This is, of course, not surprising: natural text is produced by humans, who face the same issues in producing it, and who also leverage the Gricean maxims to produce text that is easier to segment into world-model pieces.
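The density drift behind those artefacts is easy to reproduce: OR-bundling k random binary vectors of density p drives the result's density toward 1 as 1 - (1-p)^k, so a few vectors per segment stay moderate while ~100 in one cycle saturate. A sketch under assumed parameters (the base density 0.05 and segment sizes are illustrative, not values from the experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8192

def or_bundle_density(k, p=0.05):
    """OR-bundle k random binary vectors of density p and
    return the density of ones in the result."""
    vs = rng.random((k, D)) < p
    return float(np.logical_or.reduce(vs).mean())

segment = or_bundle_density(4)      # a few clauses per natural-text segment
one_cycle = or_bundle_density(100)  # ~100 vectors forced into one cycle
```

Here `segment` lands near 1 - 0.95^4 ≈ 0.19, while `one_cycle` is near 1 - 0.95^100 ≈ 0.99: without the resets that segment boundaries provide, the representation drifts toward all-ones.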

I also have a paper on “logical rotation” that details one distinct (and the most complicated) step of processing world-model pieces so they can be “glued” together, e.g., on a grid-cell representation. This explains, for instance, how we can generate mental models of a space in survey perspective from route instructions. In summary, I have really stress-tested the system, and it has yet to fail in terms of cognitive adequacy.

Best regards,
Hedda

> On 03.04.2021, at 9:48 AM, Evgeny Osipov <evgeny...@ltu.se> wrote:
>
> Dear Hedda,
>
> Thank you for the slides. I just uploaded the video with your talk and linked from the website. I also linked your presented slides.
>
>
> One thing I wanted to write to you about after your talk concerns the nature of my question (and, I presume, Fritz’s), which would be interesting to discuss on a future occasion.
>
> I am now working on implementing VSA functionality on spiking hardware. There, the main design objective (which is also supported by neurophysiological evidence) is sparse activation of neurons. So in spiking hardware, all representations, and the results of operations on them, are sparse with approximately the same density of ones. In dense VSA (with bitwise thresholded summation for bundling and XOR for binding), the density is likewise constant all the way.
>
> I think I now have a much better understanding of your approach: the use of distributed representations together with purely logical operations for reasoning. The question remains how this approach could be realised with constant (or controllable?) density all the way through (say, with dense representations to begin with). Is it possible to avoid the all-ones vector for computing the world model? The spiking implementation definitely wants to avoid that. From my (implementational) perspective, it is totally valid to use the L1 norm for comparing the outcomes of predicate computations, and it is also valid to use both density-expanding (OR) and density-collapsing (AND) operations. I feel, however, that there is a need for additional density-controlling functionality… Any thoughts on this?
>
> Cheers
>
> Evgeny
>
>
>
>
>
>
> --
> Professor Evgeny Osipov
>
> Dependable Communication and Computation Systems
>
> Department of Computer Science
> Electrical and Space Engineering
> Luleå University of Technology
>
>> On 2 Apr 2021, at 18:54, Hedda Schmidtke <hedda.s...@gmail.com> wrote:
>>
>> Dear Evgeny,
>>
>> I am attaching the slides. I made the video demo available online and updated the corresponding slide. Sorry, it took so long.
>>
>> Best regards,
>> Hedda
>>> --
>>> You received this message because you are subscribed to the Google Groups "VSACommunity" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an email to vsacommunity...@googlegroups.com.
>>> To view this discussion on the web visit https://groups.google.com/d/msgid/vsacommunity/F66703BB-36C2-477C-BFA2-CB2F08F1844F%40ltu.se.
>>
>> <slides.pdf>
>
