Version 0.4.5 release


Brandon Rohrer

Feb 28, 2013, 6:41:33 PM2/28/13
to becca...@googlegroups.com
Greetings BECCA users,

I'm pleased to announce the release of a fresh version of BECCA! It performs better with fewer lines of code.
Here's what's new:

• In a major change, action selection has been completely reworked. It now includes
intermediate goals and information-driven exploration. These changes increased
benchmark performance quite a bit.
• The planner and model have been simplified and absorbed into the actor. This decreased
the size of the code significantly.
• The benchmark worlds now include periodic resets to random states.


In an effort to keep BECCA lean, I've also broken out SeH's TCP server into a separate repo:

https://github.com/brohrer/becca_tcp_server
(SeH, if you'd like me to transfer ownership of this to your automenta account just let me know.)

as well as the world I am currently developing on:

https://github.com/brohrer/becca_world_find_block


You can download version 0.4.5 from Matt's GitHub site:


and from 


Enjoy!

Brandon


SeH

Feb 28, 2013, 10:15:59 PM2/28/13
to becca...@googlegroups.com
the tcp server was very temporary and i don't recommend using it without some obvious improvements like transferring in binary instead of text.

what might be even better is to implement an RL-glue interface so that BECCA can be compared with other RL agents in different environments

RL-Glue (Reinforcement Learning Glue) provides a standard interface that allows you to connect reinforcement learning agents, environments, and experiment programs together, even if they are written in different languages.

To use RL-Glue, you first install the RL-Glue Core, and then the codec that lets the language(s) of your choice talk to each other. The core project and all of the codecs are cross-platform and run on Unix, Linux, Mac, and Microsoft Windows (sometimes under Cygwin).
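To give a feel for what such an integration would involve: RL-Glue's agent contract boils down to a handful of callbacks. Here is a minimal standalone Python sketch of that shape. The method names mirror the RL-Glue agent interface, but the BECCA hooks inside are hypothetical placeholders, not the real BECCA 0.4.5 API, and this sketch does not import the actual RL-Glue codec.

```python
import random

class BeccaGlueAgent:
    """Sketch of an RL-Glue style agent wrapper for BECCA.

    Method names follow the RL-Glue agent contract (agent_init,
    agent_start, agent_step, agent_end); the body of each method is a
    hypothetical stand-in, not BECCA's actual API.
    """

    def __init__(self, num_actions=4):
        self.num_actions = num_actions
        self.last_action = None

    def agent_init(self, task_spec):
        # Parse the task spec and size BECCA's sensor/action arrays here.
        self.last_action = None

    def agent_start(self, observation):
        # First step of an episode: an observation but no reward yet.
        self.last_action = self._choose(observation)
        return self.last_action

    def agent_step(self, reward, observation):
        # Feed (reward, observation) to BECCA, get the next action back.
        self.last_action = self._choose(observation)
        return self.last_action

    def agent_end(self, reward):
        # Episode finished; deliver the final reward to the learner.
        pass

    def _choose(self, observation):
        # Placeholder policy; BECCA's actor would be called here.
        return random.randrange(self.num_actions)
```

With a wrapper of this shape, the same BECCA instance could be dropped into any RL-Glue experiment program and compared head-to-head with other agents.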



You might also want to add an entry for BECCA at http://mloss.org and similar websites

Thanks!






Brandon Rohrer

Mar 1, 2013, 9:21:55 AM3/1/13
to becca...@googlegroups.com
SeH,

Great idea. It's not in my development path right now, but anyone who takes on RL-Glue integration would definitely earn a gold star. Unless you object, I'll keep your TCP server code available. You may not be satisfied with it as a final product, but right now it's all the BECCA code base has.

Brandon

SeH

Mar 1, 2013, 10:05:46 AM3/1/13
to becca...@googlegroups.com
google results for 'numpy opencl' (for accelerating BECCA with GPU)

PyOpenCL provides all that is needed for productive coding with OpenCL:
  • supports nearly all OSes (Linux, OS X, Windows)
  • supports OpenCL 1.2 (the latest release)
  • is complete
  • allows interactive mode
  • automatically manages resources
  • automatically checks for (and reports) errors
  • integrates with the NumPy package
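The NumPy integration is the main draw here: PyOpenCL's `pyopencl.array` type mirrors the NumPy array interface, so vectorized inner loops can port to the GPU with little change. A hedged sketch (the `decay_and_add` function and its leaky-integrator use are my own illustration, not BECCA code; it falls back to plain NumPy when no OpenCL runtime is available):

```python
import numpy as np

try:
    # With an OpenCL runtime installed, pyopencl.array mirrors the
    # NumPy array interface, so the same expression runs on the GPU.
    import pyopencl as cl
    import pyopencl.array as cl_array

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    def decay_and_add(activities, inputs, decay=0.9):
        a = cl_array.to_device(queue, activities)
        b = cl_array.to_device(queue, inputs)
        return (a * decay + b).get()
except Exception:
    # CPU fallback: identical arithmetic in plain NumPy.
    def decay_and_add(activities, inputs, decay=0.9):
        return activities * decay + inputs

# Example: one leaky-integrator update of a feature-activity vector,
# a stand-in for the kind of elementwise work BECCA does every step.
acts = np.zeros(4)
acts = decay_and_add(acts, np.ones(4))
```

Elementwise updates like this are exactly the workload where a GPU array library pays off once the vectors get large.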

---

also i'm wondering what features overlap between BECCA, Numenta HTM, OpenCog DeSTIN and Q-Learning, as well as any other algorithms that may be related.

https://github.com/opencog/opencog/tree/master/opencog/embodiment/DestinCudaAlt

DeSTIN stands for Deep SpatioTemporal Inference Network and is a scalable deep learning architecture that relies on a combination of unsupervised learning and Bayesian inference. A paper by the inventors of this method is available. Briefly put, DeSTIN uses an online clustering algorithm to hierarchically create centroids in a way that loosely mimics the way humans understand things.
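The centroid mechanism in that description is just incremental (streaming) clustering: each new sample pulls its nearest centroid toward it by a shrinking step. A generic sketch of that update, not taken from the DeSTIN source:

```python
import numpy as np

def online_cluster_step(centroids, counts, x):
    """One step of online k-means: move the nearest centroid toward
    the new sample x, with a step size of 1/count so centroids settle
    as they accumulate evidence. Generic illustration of the kind of
    update a DeSTIN node performs, not DeSTIN's actual code."""
    i = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
    counts[i] += 1
    centroids[i] += (x - centroids[i]) / counts[i]
    return i

# Two centroids; a sample near the first pulls only that one over.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
counts = np.ones(2)
winner = online_cluster_step(centroids, counts, np.array([1.0, 1.0]))
```

Stacking layers of such nodes, each clustering the outputs of the layer below, gives the hierarchical part of the architecture.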

"Personally, I would like to see OpenCL succeed. It has the right ingredients as a standard--mainly run-time code generation and reasonable support of heterogeneous computing. On top of that, being in a multi-vendor marketplace is a good thing--also for Nvidia, although they might not immediately see it that way."
 
If I was starting something new, I would likely go with OpenCL, unless I desperately needed one of the proprietary CUDA libraries.

Brandon Rohrer

Mar 1, 2013, 7:11:33 PM3/1/13
to becca...@googlegroups.com
Thanks for the heads up on GPU resources. I would love to see what kind of a speedup they would give, but I doubt I'll get to it myself in the near future. 

>also i'm wondering what features overlap between BECCA, Numenta HTM, OpenCog DeSTIN and Q-Learning, as well as any other algorithms that may be related.

This is an excellent question. I try to answer it in sections E1 and E2 of the BECCA Users Guide:

Brandon

Matt Chapman

Mar 1, 2013, 9:18:59 PM3/1/13
to becca...@googlegroups.com
Also, regarding DeSTIN, I would say that its applicability to action selection is not obvious the way it is in BECCA.

As a crude and probably misleading analogy, BECCA is to DeSTIN as a feedforward neural net is to a Hopfield net.

But DeSTIN is *NOT* actually any kind of Hopfield net; please don't misunderstand me. It has much more in common with HTM, as far as I can tell from what little is known about HTM. But both Hopfield nets and DeSTIN operate on the idea of attractors that cause the entire network to settle into some classifiable pattern, as opposed to proceeding directly to some explicit output.

^^ One hack's perspective. 

All the Best,

Matt Chapman
Ninjitsu Web Development
http://www.NinjitsuWeb.com
ph: 818-660-6465 (818-660-NINJA)
fx: 888-702-3095

http://www.linkedin.com/profile/view?id=13333058

If you want to endorse my skills on Linked In, the most valuable endorsements to me are "Open Source" and "Software Development."


SeH

Mar 2, 2013, 9:44:27 AM3/2/13
to becca...@googlegroups.com
are there any applications for BECCA's feature extractor separated from the reinforcement learner?  ex: an interface that wraps only the feature extractor, with:
  • input: sensors (but not necessarily primitives)
  • output: feature activity
(this may already have been discussed, but) might there be any advantages to instantiating a hierarchy of feature extractors? since BECCA's feature extractor itself seems designed to process hierarchical features already, this could be redundant and decrease generality. but the API mentioned above would allow formation of explicit feature extractor hierarchies, possibly containing feedback loops, or time-delayed feedback loops.
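The interface described above could be sketched like this. Everything here is hypothetical: the class name, the random-projection internals, and the stacking pattern are my own illustration of the sensors-in, feature-activity-out contract, not BECCA's actual perceiver API.

```python
import numpy as np

class FeatureExtractor:
    """Hypothetical wrapper exposing only a feature extractor:
    a sensor vector goes in, a feature-activity vector comes out.
    The internals (random projection plus rectification) are a
    stand-in for BECCA's real perceiver."""

    def __init__(self, n_inputs, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.standard_normal((n_features, n_inputs))

    def step(self, sensors):
        # Map sensors to non-negative feature activities.
        return np.maximum(0.0, self.weights @ sensors)

# An explicit hierarchy is then just composition: each level's
# feature activity becomes the next level's sensor input.
level1 = FeatureExtractor(n_inputs=8, n_features=6)
level2 = FeatureExtractor(n_inputs=6, n_features=4)
activity = level2.step(level1.step(np.ones(8)))
```

Feedback loops would amount to routing a level's output (possibly delayed by one timestep) back into an earlier level's input vector.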

on page 20 of http://www.sandia.gov/~brrohre/doc/becca_0.4.5_users_guide.pdf:

if 4.1.2 "Perceiver" corresponds to the "feature extractor" box and 4.1.3 "Actor" corresponds to the "reinforcement learner" box, maybe this correspondence could be made clear in the diagram.

page 62 minor correction

"once source" -> "one source"

Brandon Rohrer

Mar 4, 2013, 10:05:25 AM3/4/13
to becca...@googlegroups.com
Definitely. I've played around with just the feature extractor, but haven't considered the possibility of combining them in an explicit hierarchy. I would be very interested to see it done. But you make an excellent point that for solving some problems, you just want to learn the features.

And thanks for the edits! I'll modify the guide for the next version.

Brandon

SeH

Mar 5, 2013, 9:07:01 PM3/5/13
to becca...@googlegroups.com
it might be interesting to port BECCA back to Java and integrate with Encog which already provides GPU kernels for several AI algorithms

https://github.com/encog/encog-java-core

Matt Chapman

Mar 5, 2013, 9:20:42 PM3/5/13
to becca...@googlegroups.com

I bet BECCA will run under Jython...

All the Best,
Matt Chapman

Brandon Rohrer

Mar 5, 2013, 10:11:44 PM3/5/13
to becca...@googlegroups.com
That would be cool, especially as the underlying algorithms mature and stop changing so dramatically with each release. Unfortunately I'm not sure when that will be... :)