
Thinking of moving from NeuroShell 2 to R-Project, Comments?


TomH488

Dec 22, 2014, 3:32:24 PM
NS2 is just too limited and frankly, dead.

I'm thinking R-project or something like EasyNN+, which are really opposite ends of the spectrum (huge vs. simple). I spent time working with Membrain and NetMaker and quit both because of inadequate documentation. CMU's divested Emergent appears to be massively capable but undecipherable. Oh, and NetMaker's block GUI approach was novel but again suffered from the "GUI hole," where you cannot determine what you actually ran because the settings are scattered over a vast array of GUIs.

What is nice about scripting control and module calls in code is that you can read the calls and their parameters and know exactly what you just ran or are about to run.
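For example (the function and argument names here are purely hypothetical, just to illustrate the point), a scripted R call documents itself:

  # hypothetical call - every setting is visible in one place
  fit <- train_net(data = prices, hidden = c(10, 5),
                   act = "tanh", lr = 0.01, seed = 42)

No dialog boxes to dig through; the line is the record of what you did.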

I like the idea of open source, which eliminates dead ends.

However, I'm trying to find out whether there is any community out there to discuss matters with and get help from when questions come up with the platform.

I'm still amazed at how little there is for neural networks. There isn't even a web-based forum for Nnet discussions. And some platforms use email trees, which is truly sick.

There is a fair amount of MATLAB, but that is crazy expensive, and when I did the trial of the Nnet Toolbox, I found all kinds of problems with their code and quickly ran the other way. And gee, I could have all that for five figures.

Oh, and CUDA!

Currently I'm dropping backprop for PNN and GRNN, which may be a mistake, but the simplicity and efficiency they offer are hard to beat. My back tests take two weeks of computer time, and I'm not even scratching the surface averaging over 3 random number seeds.
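To show what I mean by simplicity: the core of GRNN fits in a few lines of R. This is my own toy sketch (made-up names; sigma is the single smoothing parameter), not any package's code:

  # toy GRNN: prediction = kernel-weighted average of training targets
  grnn_predict <- function(X_train, y_train, x_new, sigma = 0.5) {
    d2 <- rowSums(sweep(X_train, 2, x_new)^2)  # squared distance to each case
    w  <- exp(-d2 / (2 * sigma^2))             # Gaussian kernel weights
    sum(w * y_train) / sum(w)                  # weighted average of targets
  }

  # quick check: learn y = x1 + x2 from 50 random cases
  set.seed(1)
  X <- matrix(runif(100), ncol = 2)
  y <- X[, 1] + X[, 2]
  grnn_predict(X, y, c(0.4, 0.6))              # lands near 1.0

There is no iterative weight training at all; the training set itself is the network.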

And I still would like to find a truly active place where EVERYONE goes for Nnet cross-pollination.

Thanks
Tom

patriot

Dec 23, 2014, 7:41:25 AM
Tom,
I agree - let's start up a community conversation (unless everyone is concerned Google is here to pick our pockets).

I don't know much of anything about the existing platforms, on purpose. Deriving for myself what is needed is how I learn, and I do love to sling code.

My current sandbox (aka JCN) is evolving with these primary requirements:

1) Must be OO - just plug in new methods or override existing ones.
2) Built around a standard protocol between all methods that carries input sets, hidden-layer weights, and outputs, including metadata describing the context, configuration, and case(s). (A sketch follows this list.)
3) Must be thread safe for multiprocessing (it contains state).
4) An abstract active data dictionary for speed and flexibility (I call it a dynamic dictionary, and no, I refuse to burn cycles using XML for this) - supports both instance descriptions and time-series data, from input sources or derived.
5) Self-adjusting periodicity - I want to effortlessly zoom in and out, changing granularity while bridging gaps.
6) Simple input data definition, load, filters, and transformation - but everything beyond that is to be abstracted, subject-area independent.
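To make requirement 2 concrete (JCN isn't written in R, and these field names are only illustrative), the protocol is a self-describing record along these lines:

  # illustrative only - a self-describing record passed between methods
  make_protocol <- function(inputs, weights, outputs, meta) {
    list(inputs  = inputs,    # input sets, one case per row
         weights = weights,   # hidden-layer weights carried along
         outputs = outputs,   # outputs produced so far
         meta    = meta)      # context, configuration, case descriptors
  }

Every method takes one of these in and hands one back, which is what keeps things thread safe - no shared state outside the record.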

The current GUI looks like it was written by a second-year college student, but that is not important. If I get bored I may slap on a pretty graphical sandbox metaphor using a palette of colorful objects with smartly curving lines, just to make the mathematician types drool; but I don't plan on commercializing this, and there are more interesting challenges taking priority. At this point it is a scripting language within its own IDE, but joking aside, draggable, connectable objects might be a user-friendly way to perform the configuration. Until then I'll add context-sensitive pop-ups in the short term to help keep track of methods (functions), syntax, and options.

When I discussed this with one of the Big Data companies, they didn't see it. Assembling and normalizing all the data from across the globe isn't enough. In short, I want the speed and flexibility to experiment and fail fast; very fast.

Well, that should provide sufficient rope for a proper intellectual lynching, seeing how new I am at this. But what the heck, I'm in need of an AI intervention anyway.

Given your experience I would be curious as to how this aligns with what you've seen and what additional requirements you have in mind. (keeping in mind Google is looking over your shoulder)

TomH488

Dec 23, 2014, 2:14:25 PM
First, I completely HATE Object Oriented programming.

I just looked at the GRNN package in R, and it consisted of 6 files and about 30 lines of code total. And that code was just a "front end" simplification of the stuff it called.

In fact, the "kernal" was a call to "nn" which is a "general regression neural network with or without training."

Of course, as in all OO, WHERE IS NN?

It's probably a few hundred lines of code scattered over 5 packages or more, with many code files in each package. So what, maybe scattered over 50 code files? 200?
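To be fair, base R does give you tools to chase a function down (I'm not claiming to know what grnn's namespace actually holds beyond that one call):

  library(grnn)
  getAnywhere("nn")          # find "nn" in any loaded namespace and print it
  ls(getNamespace("grnn"))   # list everything in the package, exported or not

It still doesn't tell you WHY the code is spread that way, but at least the hunt is shorter.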

I'm tempted to start running Timothy Masters' code, which is listed in his books. Nice, clear, FORTRAN-like C++ code. Easy to read. Easy to understand. Easy to modify.

My partner is OO to the max and beats me up pretty good over my "backward ways."

Areas that are hit or miss in the codes I have mentioned are:

1) Error functions - VERY VERY IMPORTANT (cross entropy for one! - see the snippet after this list)
2) Input culling - virtually non-existent
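For the record, cross entropy itself is only a couple of lines of R (binary case):

  cross_entropy <- function(y, p, eps = 1e-12) {
    p <- pmin(pmax(p, eps), 1 - eps)           # clamp to avoid log(0)
    -mean(y * log(p) + (1 - y) * log(1 - p))   # binary cross entropy
  }

The hard part isn't writing it - it's finding a package that lets you swap it in.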

And what's almost completely missing is any kind of place to get help. Emergent is so massively deep and wide, yet there is virtually nothing that explains it.

And the biggest missing thing is usually examples which I find the most helpful.

For now, I'm still hanging on to NS2, which does have PNN and GRNN; given their simple input requirements, those may be OK in NS2. Backprop and feedback nets are another matter.

patriot

Dec 24, 2014, 12:19:22 AM
Six weeks back I wrote my first "input culling" routine. When I had previously 'suggested' input pruning heuristics by hand, they usually hurt the results. Perhaps the NN, in its own way, was telling me to stick to coding and let it make the decisions. The machine approach was fairly simple: iterate incrementally through each neuron's normalized range to teach the pruner where the results began to suffer, i.e., where the error increased (rough sketch after the list). The outcome was less than spectacular. Four follow-ups:

1) Segment the input into clusters and retest with the narrower focus.
2) Experiment with different training targets (flip the approach and make the bad results the target, as one example).
3) Find ways to identify the predictive neurons (as I've done in the hidden layer).
4) Once 2 and 3 are complete, evaluate permutations of the predictive neurons' increments (fuzzy logic of sorts).
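Not my actual routine, but the shape of the idea fits in a dozen lines of R (this scores sensitivity rather than error increase - a simpler cousin; it assumes inputs normalized to [0,1] and takes whatever trained predict function you hand it):

  # score each input by how much sweeping it moves the prediction;
  # low movers are culling candidates
  input_sensitivity <- function(X, predict_fn, steps = 20) {
    base <- colMeans(X)                 # hold the other inputs at their means
    sapply(seq_len(ncol(X)), function(j) {
      preds <- sapply(seq(0, 1, length.out = steps), function(v) {
        x <- base
        x[j] <- v                       # sweep input j across [0,1]
        predict_fn(x)
      })
      diff(range(preds))                # small spread -> weak input
    })
  }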

In other words, the neural network becomes a network of neural networks - input clustering, input pruning, neuron pruning, neuron layering, neuron weighting ... (all of which could differ across N training and production use cases) ... and then finally training the beast. Does every investigation have to lead to three more approaches? It never ends, which is why the sandbox needs to be fast and flexible, and the method protocol self-contained.

I'm an old-school compiler writer (why generate assembler when you can go straight to machine code?) who went to the dark side (IT senior management). Thinking recursively in abstract objects evolved long before some consultant popularized the term OO to make a buck. My first commercial product was 150 methods, few of them longer than a page. It had a run of over 20 years and was so error-free that I awarded a prize when a bug was found. It would be painful to write multi-threaded or event-driven code that wasn't OO. Eighteen months ago I returned to my AI roots as a hobby. Out of curiosity, have the math professors attempting AI with their boring LPs been replaced by an open source class library? :-)

TomH488

Jan 11, 2015, 11:50:03 PM
The thought of separating the input based on its clusters is something I've hypothesized would work.

Right now I'm trying GRNN and PNN instead of backprop.

What intrigues me is looking at the training set cases as nodes in an n-dimensional finite element model:

Simply mesh it and use element shape functions to interpolate answers from the nodes out to the case you want to predict.

GRNN and PNN seem to work like that.
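In fact, that is essentially what GRNN computes - a shape-function-weighted average over the training nodes, with Gaussian kernels playing the role of shape functions:

  yhat(x) = sum_i [ y_i * w_i(x) ] / sum_j [ w_j(x) ],
  where w_i(x) = exp( -||x - x_i||^2 / (2*sigma^2) )

After normalization the weights sum to one at any query point, just like shape-function values inside an element.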

I'm going to post a thread asking when not to use GRNN and PNN.