aHuman Vision Research


Phoenix

Sep 22, 2010, 4:07:41 PM
to Discuss a Human Project
Refer to the wiki page under research to start the discussion.

I am currently looking into edge detection algorithms to preprocess the image before it is fed to the primary visual cortex.
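
For illustration, a minimal sketch of the kind of preprocessing step I have in mind, assuming OpenCV were used (the library choice is still open, so treat names and thresholds as placeholders):

    #include <string>
    #include <opencv2/opencv.hpp>

    // Hypothetical preprocessing: load the image as grayscale, smooth it,
    // then run Canny edge detection before passing it on.
    cv::Mat preprocessForVisualCortex( const std::string &path )
    {
        cv::Mat img = cv::imread( path, 0 );                        // flag 0 = grayscale
        cv::Mat blurred, edges;
        cv::GaussianBlur( img, blurred, cv::Size( 5, 5 ), 1.5 );    // suppress noise first
        cv::Canny( blurred, edges, 50, 150 );                       // thresholds are a guess
        return edges;                                               // binary edge map
    }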

Let me know if you have any other ideas or suggestions.

Thanks

Vladimir Savchik

Sep 23, 2010, 3:05:55 PM
to ahuman-...@googlegroups.com
As I understand it, visual perception is a very complex area with applications in robotics and non-AI fields.
I think many useful insights can be derived from examining the visual cortex, but there is little point in having a visual sensor for an autonomous program hosted on a local PC or an internet server.

I would prefer that we generalise the visual cortex logic for use with the other kinds of sensors we can have - symbolic chat, process metrics, free-style internet pages.
Maybe also with output from the speech recognition engine built into Windows.

In any case, my advice:
1. understand the area to some extent
2. start a page and try to explain your plan
3. remove any words that add no value there
4. read it again and find the most promising direction for further thinking
5. try some code

The major cortex question for me now: if we just convert the whole picture into one number that encodes some entity, what will we do with this number in the neural mind?
The simple military answer is: if the enemy is recognised, kill him.
That is not the case for us - a single number has no value at all; it is far too rough a model of the world to experience emotions or to support complex multi-level behaviour.

I need an answer to the question: what is the actual output of the human visual cortex?


Phoenix

Sep 24, 2010, 4:11:36 PM
to Discuss a Human Project
Visual perception will still be required even if aHuman does not use any visual sensors, because it can derive decisions as well as knowledge by researching on the internet and looking at various pictures posted there.

Also, later we can integrate it with a webcam so that it can interact with the user if possible. All of this is just a plan whose feasibility is still not known.

If we have a visual perception model, the brain will get the output of this perception, and we can derive the emotions of aHuman depending on the output of the visual cortex.

E.g. I want aHuman to look at weather.com daily and tell me what the weather will be today; visual perception can help aHuman analyze the pictures posted on the website, derive the knowledge, and report it to me.

Let me know if I am thinking in the right direction.

-Sarbjit

Vladimir Savchik

Sep 25, 2010, 3:23:38 AM
to ahuman-...@googlegroups.com
Actually I don't know whether it is the right direction :).
If you succeed then yes; if not, then no.

I strongly agree that researching visual perception can lead to good design decisions (as long as you avoid going too deep into it and forgetting the original targets).
For visual perception itself:
- it is the most thoroughly examined area of human perception
- it is the area where the volume of transferred information and the CPU resources consumed will be high, maybe requiring dedicated hardware to run on

As for emotions, let me state what I am sure of now:
- aHuman will be alive if it is conscious, feeling and acting
- all targets except feeling are now solved to some extent in the world (some more, some less), which makes aHuman's feeling the primary objective and challenge of the project
- perception itself is related to consciousness, not to feeling; that is why we can use mathematical belief networks rather than neural networks there without compromising the artificial-life goal of the project
- emotions are the way to communicate feelings, i.e. to express feelings through exposed properties or to recognise a counterparty's feelings from their exposed emotions
- perception solves recognition tasks, e.g. reading a counterparty's emotions
- feeling arises as an emergent feature of embodiment (which is still possible for pure software!) and of plan/fact comparison in the behaviour component


Phoenix

Sep 27, 2010, 4:31:45 PM
to Discuss a Human Project
Frankly, I am still struggling to find what the intention is, and I am also facing some problems compiling and running the checked-in version of the code. Kindly make sure the code is in sync. Even if you are using some stubs for running, kindly check in the code with comments so that we can isolate the temporary code later on. Code in the code base should build and run without any problems or compilation errors.

I need more information about where we are now and what is coming next. I would appreciate it if you could outline the project deliverables in somewhat more detail.

For every module, if you can highlight its input and intended output, it will help me plan my work and align my vision with yours.

Looking forward to more information from your side.

Thanks


Vladimir Savchik

Sep 30, 2010, 4:19:44 PM
to ahuman-...@googlegroups.com
Fixed - it builds and runs OK.

Everything is committed except temporary files:

C:\projects\ai2svn\human>svn status
?       aiengine\log\main.log
?       aiengine\lib\debug\generic.vc.debug.lib
?       aiengine\src\libbn\sf_neocortex_1_4
?       aiengine\src\libbn\sf_neocortex_1_4_2\dist
?       aiapi\build
?       aiapi\bin
?       aiapi\aiapi.mk
?       workspace.human\workspace.human.tags
?       workspace.human\workspace.human.batch_build
?       workspace.human\workspace.human.workspace.session
?       workspace.human\workspace.human_wsp.mk
?       workspace.human\cscope_file.list
?       workspace.human\generic.vc\Debug
?       workspace.human\generic.vc\generic.vc.vcxproj.user
?       workspace.human\aiconsole.vc\Debug
?       workspace.human\aiconsole.vc\aiconsole.vc.vcxproj.user
?       workspace.human\aiengine.vc\Debug
?       workspace.human\aiengine.vc\aiengine.vc.vcxproj.user
?       workspace.human\human.solution.vc\ipch
?       workspace.human\human.solution.vc\human.vc.suo
?       workspace.human\human.solution.vc\Debug
?       workspace.human\human.solution.vc\human.vc.sdf
?       workspace.human\human.solution.vc\human.vc.opensdf
?       workspace.human\aiapi.vc\aiapi.vc.vcxproj.user
?       workspace.human\aiapi.vc\Debug
?       generic\build
?       generic\generic.mk
?       generic\bin
?       aiconsole\build
?       aiconsole\aiconsole.mk
?       aiconsole\bin
?       aiconsole\lib\debug\aiapi.vc.debug.lib
?       aihtmview\aihtmview.h.gch
?       aihtmview\build
?       aihtmview\aihtmview.project
?       aihtmview\aihtmview.mk
?       aihtmview\main.cpp
?       aihtmview\bin
?       workspace.tools\workspace.tools.tags
?       workspace.tools\workspace.tools.workspace.session
?       workspace.tools\workspace.tools_wsp.mk

---

Agreed about code stability for the committed codebase - it will surely get better with increased concurrent use of the repository.

I think http://code.google.com/p/ahuman/wiki/ProjectPlanning basically shows the path (maybe we can speak about a "commercial" aCat/aDog as the nearest goal instead of/before aChild).
My intention is to create a skeleton first (aMatter) in which every architecture component exists and works to some extent, while the overall build is not just dirty trial code but a live application.

The tasks below are required for aMatter completion, as per my current understanding:
- understand the functioning of neural sensors and the nature of neural control of sensors - a working model is ready (see filesyswalker); controls are TBD
- understand neural signal transmission between neural networks and areas - a sort of Hebbian learning - e.g. between sensor and cognition (a minimal sketch of the update rule follows this list)
- have a working sequence machine that converts variable-length sequences of sensor data into perception data
- understand what perception data is and what the internal representation of perceived data is
- understand the role and functions of associative memory, the hippocampus and the entorhinal cortex
- understand multi-level behaviour and create a continuous planning/coordination process
- create a motor cortex and make it learn complex actions
- implement feeling as software embodiment and connect feeling to behaviour
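
For the Hebbian item above, only a minimal sketch of the kind of update rule meant (names and rates are illustrative, not existing project code):

    #include <vector>

    // Plain Hebbian rule: a weight grows when pre- and post-synaptic activity
    // coincide (dw = rate * pre * post); a small decay keeps weights bounded.
    void hebbianUpdate( std::vector<double> &w,
                        const std::vector<double> &pre,
                        double post,
                        double rate = 0.01,
                        double decay = 0.001 )
    {
        for( unsigned k = 0; k < w.size(); k++ )
            w[ k ] += rate * pre[ k ] * post - decay * w[ k ];
    }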


Phoenix

Oct 1, 2010, 2:24:22 PM
to Discuss a Human Project
Thanks for the more detailed information. For now I am thinking of porting Thinker.cpp from the SF Neocortex project to aHuman; it is the interface of libbn to the external world. Let me know if I should do that.

Vladimir Savchik

Oct 1, 2010, 3:32:29 PM
to ahuman-...@googlegroups.com
I guess it is too hard for you to drive a big research area on your own, so I would suggest collaborating on small tasks under my supervision.

If you agree, please see below.

Thinker is an implementation of a visual sensor in a specific GUI environment. It is also an example of how to use the library.
I want to connect the existing filesyswalker sensor to perception in such a way that we can treat it as a generic model for all sensors.
Even with a ready-to-use library, this raises challenges:
- filesyswalker produces fixed-width, variable-length neural sequences of encoded filename paths and performed actions ([-1,1] per symbol), while sf_neocortex works only with fixed-length input (see the encoding sketch after this list)
- after the first message is received by the neocortex area from a given sensor, the neocortex should create a reasonable neocortex network, cortex and cognitive processor derived only from the sensor cortex properties
- cortexes have a 3D-space location (parallelogram) and orientation and a predefined interface - the cognitive processor should implement the same
- sf_neocortex is trained with a teacher, showing both inputs and labels; we need learning without a teacher
- sf_neocortex outputs a predefined number of the most probable causes with their probabilities; we need richer outputs
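
To make the first point concrete, a rough sketch of how a filename path could be mapped symbol by symbol into [-1,1] and padded to the fixed width sf_neocortex expects (the padding value and scaling here are assumptions, not the existing filesyswalker code):

    #include <string>
    #include <vector>

    // Map each byte of the path to [-1,1) and pad/truncate to a fixed width,
    // since sf_neocortex works only with fixed-length input.
    std::vector<float> encodePath( const std::string &path, unsigned fixedLen )
    {
        std::vector<float> v( fixedLen, 0.0f );   // 0 used as "no symbol" padding (assumption)
        for( unsigned k = 0; k < path.size() && k < fixedLen; k++ )
            v[ k ] = ( ( unsigned char )path[ k ] / 127.5f ) - 1.0f;
        return v;
    }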

I would like to have this done soon - please commit once per day; I will review and respond or amend if required.


Habbit

Oct 1, 2010, 3:40:28 PM
to Discuss a Human Project
1. Now, after app start, the filesyswalker sensor starts to work.
It sends the first sensor packet, which leads to the execution of
CognitiveProcessor::createCortexProcessor( Cortex *inputs ).
Create a proper cognitive processor in this function (a rough sketch is shown below).

2. Creation should use sf_neocortex via a facade, to be created.
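
Very roughly, the facade meant in point 2 could look like the following; every name here is a placeholder for illustration, not existing or final code:

    #include <vector>

    // Placeholder facade hiding sf_neocortex details from the engine side.
    class NeoCortexFacade {
    public:
        virtual ~NeoCortexFacade() {};

        // build the internal network, sized only from sensor cortex properties
        virtual void create( int inputWidth, int layerCount ) = 0;

        // feed one fixed-length sensor packet, return the current belief vector
        virtual std::vector<float> feed( const std::vector<float> &inputs ) = 0;
    };

    // CognitiveProcessor::createCortexProcessor( Cortex *inputs ) would construct
    // a concrete implementation of this facade and keep it for later feed() calls.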

Phoenix

Oct 4, 2010, 9:21:25 AM
to Discuss a Human Project
I think you are right; I should work under your supervision as it is hard for me to work on a big research area alone. I will work on the areas you direct me to and will talk to you if I need clarification.

Thanks

Phoenix

Oct 11, 2010, 8:35:36 AM
to Discuss a Human Project
Hi,

Can you please update me on the recent changes in the project? I saw you added a new custom neocortex library and there were some changes to the code that I committed. Kindly post your reviews in the discussions so that I am clear about your direction.

Thanks

Vladimir Savchik

Oct 11, 2010, 12:03:55 PM
to ahuman-...@googlegroups.com
I'm trying to find a way to pass sensor data to the HTM in a real environment.
There also remains the question of what comes next - what the outputs of the neocortex are and where they should go.

I found that my changes and yours got merged - I tried to preserve yours, but I'm not sure I succeeded.
In the end I decided to create another library - neocortex_custom - so you can do what you want with sf_neocortex_1_4_2.

So if you need any changes in sf_neocortex_1_4_2, please make them - or roll back to the previously committed version.
I will use neocortex_custom for now - please don't commit to its code.


Phoenix

Oct 14, 2010, 5:11:30 PM
to Discuss a Human Project
I am currently researching how we will feed data to the neocortex. For the visual data, I was thinking of integrating the Google APIs, so that if aHuman wants to learn about anything it can simply query for images of that thing and feed the data to the neocortex network to learn about the physical form of the object.

All the Google APIs are in Java or .NET, so the question now is whether we should write a C++ wrapper for them, or write the code in Java (which I can do) and feed the data to the cortex network using sockets. Let me know what you think about this.




Vladimir Savchik

Oct 15, 2010, 3:04:25 PM
to ahuman-...@googlegroups.com
Create a new sensor, like filesyswalker.
Then choose whatever way you like to implement the capture of inputs and the reaction to controls from the mind.
In the case of the .NET API it is very much like filesyswalker, just with a different external API.

I slightly prefer an external process feeding a socket - but it may be hard to debug if it is not finalised quickly. The approach is supported by the idea that some project targets (aXXX - aCat, aWee or e.g. aGod) may for some reason not require such a sensor; in that case heavy libraries and dependencies could prevent an optimal build (a subject for smart builds, though).
In that case the sensor will subscribe to a channel where the existing sockets publish automatically. Still, the socket layer may need adjustment to process high-volume binary inputs, and the type of input should be configurable.
The only constraint: the input of a sensor can be of any type, while its output should be a neural one.

Approach: a sensor is a special type of cortex - a neural network with inputs and outputs.
But its inputs are not converted to outputs through a specific internal structure, as in any other cortex.
The primary flow (async) is: inputs (sensor control) -> sensor internals -> external world -> sensor internals -> sensor outputs (primary sensory data).
The secondary flow (sync) is: inputs (sensor control) -> sensor internals -> input feedbacks.

Also, note that any cortex has physical 3D size, location and orientation.
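
As a rough interface sketch of the above - all names here are illustrative only, not the existing code:

    #include <vector>

    // Sketch only: a sensor is a special cortex - neural outputs as usual,
    // but its inputs drive the external world instead of an internal structure.
    class SensorCortex /* : public Cortex */ {
    public:
        virtual ~SensorCortex() {};

        // physical placement, as for any cortex
        float sizeX, sizeY, sizeZ;
        float locX, locY, locZ;

        // primary flow (async): control -> external world -> neural outputs
        virtual void applyControl( const std::vector<float> &control ) = 0;
        virtual std::vector<float> readOutputs() = 0;   // primary sensory data

        // secondary flow (sync): control -> immediate input feedback
        virtual std::vector<float> feedback( const std::vector<float> &control ) = 0;
    };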


Phoenix

Oct 29, 2010, 2:28:34 PM
to Discuss a Human Project
I have started a new wiki page where anyone can put an explanation of the module they implemented and the way to test it. Kindly use it in the future when you write any code, so that others know what functionality is already there and how to use it.

Kindly review the code for ImageKnowledgeBase and let me know if any changes are required.

Thanks

Habbit

Oct 30, 2010, 1:08:48 PM
to Discuss a Human Project
1. Detailed Design was intended for the same purpose as the Code Explanations page you created. Please merge the pages into one, no matter what it is named.
2. When you start a new page, please add a breadcrumb trail - @@[Home] -> [...] -> ... - like on any other page. If it is a very well-established branch, a 200x100 image can be used to stress its topic.
3. I've looked into the ImageKnowledgeBase-related changes. I have only one severe concern: sockets should be used only in modmedia, and any ports should be configured in its xml file.
Currently, probably only server mode is supported by Media, via listening for incoming connections on configured ports.
By configuration, the corresponding input/output streams are mapped to messaging channels and supported in session mode.
If you need aHuman to operate in active mode at the socket layer, please add a specific channel type in media and support mapping its input/output to the proper IO messaging channels (no session in this case); see the socket sketch below.

If this is done, only xml configuration should be needed to map any socket-level connection to IO, and subsequent aHuman operations can use IO alone.
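
For clarity, the difference between server mode and active mode at the socket level, in plain POSIX sockets (error handling omitted; this is only an illustration, independent of how modmedia actually wraps sockets):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    // server mode: listen on a configured port and wait for an incoming connection
    int openServerSocket( int port )
    {
        int s = socket( AF_INET, SOCK_STREAM, 0 );
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons( port );
        bind( s, ( sockaddr* )&addr, sizeof( addr ) );
        listen( s, 5 );
        return accept( s, NULL, NULL );     // blocks until a client connects
    }

    // active mode: connect out to an external server at a given address
    int openActiveSocket( const char *host, int port )
    {
        int s = socket( AF_INET, SOCK_STREAM, 0 );
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons( port );
        inet_pton( AF_INET, host, &addr.sin_addr );
        connect( s, ( sockaddr* )&addr, sizeof( addr ) );
        return s;
    }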

Re using the knowledge module - it looks to me like quite a good idea and the first practical step towards its implementation.

Phoenix

Nov 1, 2010, 8:35:55 AM
to Discuss a Human Project
Thanks for the review comments. Actually, I was thinking along the same lines of using modmedia for socket connections, but it only contained a server implementation and I could not find any API to connect to an external system. I wanted to change it, but then I thought I should do so only after your approval, as you might have other plans for that module. Let me know how I should use that module to connect to an external server.

-Sarbjit


Vladimir Savchik

Nov 2, 2010, 5:04:39 PM
to ahuman-...@googlegroups.com
Let me add it.


Habbit

Nov 4, 2010, 4:21:59 AM
to Discuss a Human Project
I've added direct channel logic to media and reworked ImageKnowledgeBase.
It is available through the direct AIMedia service interface.
ImageKnowledgeBase uses this interface and communicates with internal channels.

Everything compiles, but to run it you need to add explicit configuration in media.xml - please read media.cpp and ActiveChannel.cpp (the configure method) to find out exactly what to add.
Also, configuration logic has not been added for ImageKnowledgeBase yet - it needs to be added as in media and ActiveChannel.

Some features are still not available:

1. Using a temporary connection with a specified address.
2. Input/output redirection through the internal messaging system.

Let me know if you can complete this yourself or will rely on me.
I will be back in a couple of days.

By the way, when adding files to the project, do not forget to commit the workspace files.


Phoenix

Nov 4, 2010, 12:54:27 PM
to Discuss a Human Project
I tried to add the code, but it seems the file 'ActiveChannel.cpp' is not present in the repo. Is there a file 'ActiveSocket.cpp'? I am getting a file-not-found build error.

-Sarbjit

Phoenix

Nov 9, 2010, 11:24:23 AM
to Discuss a Human Project
Thanks for the changes. I made some configuration changes and this module is working fine now.

-Sarbjit

Phoenix

Nov 9, 2010, 2:44:54 PM
to Discuss a Human Project
I was making some changes in the code to integrate an AIML library into the project for the chat-bot implementation. I faced some problems in socketconnection.cpp and channel.cpp and made some changes to get my code working.

The problem I faced is that when we create listeners using the framework API, the default message delimiter is 0x01, which does not hold for plain sockets. When I used the same APIs, they did not work because they expected each message to end with 0x01.
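
To illustrate what I mean, a minimal sketch of delimiter-based framing where the delimiter comes from configuration instead of being hard-coded to 0x01 (POSIX recv shown; this is not the framework code):

    #include <string>
    #include <sys/socket.h>

    // Read one message from a socket, up to a configurable delimiter,
    // instead of assuming every peer terminates messages with 0x01.
    std::string readMessage( int socketFd, char delimiter )
    {
        std::string msg;
        char c;
        while( recv( socketFd, &c, 1, 0 ) == 1 ) {
            if( c == delimiter )
                break;              // end of one framed message
            msg += c;
        }
        return msg;
    }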

Another problem I faced was with the session. When I published the chat response to the channel so that it would be returned to the end user, the session was causing a problem because of which the response was not getting written to the socket.

I changed those APIs to make my code work, which seems wrong. As you have a better understanding of the framework, can you please revert those changes and instead change my code or the framework APIs so that it works?

Also, please review the changes I made and let me know if anything needs to be changed.

-Sarbjit

Habbit

Nov 9, 2010, 3:41:43 PM
to Discuss a Human Project
Fascinating activity! That's what I like.

You are right about the 0x01 delimiter - it was done for the test console API and I forgot about it.
It really comes down to the communication protocol - a raw stream, a message stream with fixed delimiters, or anything else, e.g. xml messages.
Obviously it can be made configurable - I will do that.

Re the session - I'm not sure it is required; the idea is to isolate different sessions that use the same channels.
So either you have a session-independent subscription, or you use a specific session.
I will check your code to understand what you are actually trying to do.

Re the API - the question is whether we need it for anything except trial tools like the debug console.
I'm sure communication can be done without any specific protocol - just a common one that is easy to follow from e.g. a Java server (won't you create a Java API for it?).

One thing I spotted: you created a component in the body using the Service interface.
That looks wrong to me - the set of services is known at the engine level.
It is now becoming clear that each service probably has one or more groups of components, with a specific interface and configuration per group.
Still, such an interface (e.g. KnowledgeController) is not a Service.