UPDATE: Nov 2014


YKY (Yan King Yin, 甄景贤)

Nov 29, 2014, 6:04:58 AM
to general-in...@googlegroups.com
Hi friends,

1.  Recently I made this video on P=?NP, satisfiability, and linear programming (in Chinese with English subtitles):
https://www.youtube.com/watch?v=9MwGPrQ8yKg

but my idea of tackling the NP problem is too vague to be of substance.

2.  I am starting to implement Genifer 3.0 using MapReduce (via Hadoop, and Spark with Scala).

The main differences this time are that it will contain a learner (which I failed to implement last time), and that the Bayesian network evaluation will be out-sourced (I also failed to do this last time, so that version used only binary logic).
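
To give a concrete idea of what I mean, here is a minimal Spark/Scala sketch; the rule type and the scoring function are placeholders, not the actual Genifer design (in the real system, the scoring slot would be the out-sourced Bayesian network evaluation):

    import org.apache.spark.{SparkConf, SparkContext}

    object GeniferSketch {
      // Placeholder scoring of a candidate rule; in the real system this slot
      // would be filled by the out-sourced Bayesian network evaluation.
      def score(rule: String): Double = rule.length.toDouble

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("genifer-sketch").setMaster("local[*]"))

        // A toy knowledge base of candidate rules, distributed as an RDD.
        val rules = sc.parallelize(Seq("A --> B", "B --> C", "A --> C"))

        // map: evaluate each rule independently; reduce: keep the best one.
        val best = rules
          .map(r => (r, score(r)))
          .reduce((a, b) => if (a._2 >= b._2) a else b)

        println(s"best rule: ${best._1} (score ${best._2})")
        sc.stop()
      }
    }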

I'm very excited that a significantly usable prototype will soon exist... if anyone is interested in helping, please tell me! :)

--
YKY
"The ultimate goal of mathematics is to eliminate any need for intelligent thought" -- Alfred North Whitehead

Matt Mahoney

Nov 29, 2014, 7:46:26 AM
to general-intelligence
On Sat, Nov 29, 2014 at 6:04 AM, YKY (Yan King Yin, 甄景贤)
<generic.in...@gmail.com> wrote:
> Hi friends,
>
> 1. Recently I made this video on P=?NP, satisfiability, and linear
> programming (in Chinese with English subtitles):
> https://www.youtube.com/watch?v=9MwGPrQ8yKg
>
> but my idea of tackling the NP problem is too vague to be of substance.

Very interesting video. It helps me see the connection between linear
programming and (bounded, linear) neural networks. It does give some
insight into the P=?NP problem, but I don't think we need to solve it.
Human brains do everything you would want AI to do, and they
haven't solved it either.
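
(For anyone who hasn't watched it: the standard bridge between SAT and
linear programming, which may or may not be the exact construction used
in the video, is the LP relaxation. A clause such as (x1 OR NOT x2 OR x3)
becomes the linear constraint x1 + (1 - x2) + x3 >= 1 with 0 <= xi <= 1.
The relaxed program is solvable in polynomial time, but only because the
integrality requirement xi in {0, 1} has been dropped; recovering integral
solutions is where the NP-hardness lives.)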

Also, I didn't realize that so many mathematical terms are the same in
English and Chinese.

> 2. I am starting to implement Genifer 3.0 using MapReduce (via Hadoop, and
> Spark with Scala).

You have the hardware to implement this?

Anyway, good to hear from you. It's been a while.


--
-- Matt Mahoney, mattma...@gmail.com

swkane

Jan 26, 2015, 2:17:48 PM
to general-in...@googlegroups.com
> 2.  I am starting to implement Genifer 3.0 using MapReduce (via Hadoop, and Spark with Scala).

I have tried using Spark at the company I work for. It is very hardware hungry. We spoke with consultants from Cloudera, a Hadoop/Big Data consulting company. They recommended a minimum of 128 GB of memory per cluster node, which is quite costly. However, I believe AWS has a service where you can spin Hadoop clusters up and down as you need them.


YKY (Yan King Yin, 甄景贤)

Jan 27, 2015, 8:15:21 AM
to general-in...@googlegroups.com
On Tue, Jan 27, 2015 at 3:17 AM, swkane <diss...@gmail.com> wrote:
>> 2.  I am starting to implement Genifer 3.0 using MapReduce (via Hadoop, and Spark with Scala).
>
> I have tried using Spark at the company I work for. It is very hardware hungry. We spoke with consultants from Cloudera, a Hadoop/Big Data consulting company. They recommended a minimum of 128 GB of memory per cluster node, which is quite costly. However, I believe AWS has a service where you can spin Hadoop clusters up and down as you need them.


Thanks for the tip... :)

Seth and I have decided to brew our own MapReduce (for single machines first), which doesn't seem that difficult.
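
Roughly, I mean something like the following minimal sketch (plain Scala with made-up names, not our actual code): the whole map/shuffle/reduce cycle is just a few collection operations, and swapping the Seq for a Spark RDD later should be mostly mechanical.

    // A tiny single-machine "MapReduce": the mapper emits (key, value) pairs,
    // the pairs are grouped by key (the "shuffle"), and each group is folded
    // with the reducer.
    def mapReduce[A, K, V](items: Seq[A])
                          (mapper: A => Seq[(K, V)])
                          (reducer: (V, V) => V): Map[K, V] =
      items
        .flatMap(mapper)
        .groupBy(_._1)
        .map { case (k, kvs) => k -> kvs.map(_._2).reduce(reducer) }

    // Example: word count over two "documents".
    val counts = mapReduce(Seq("a b a", "b c"))(
      doc => doc.split(" ").toSeq.map(w => (w, 1)))(_ + _)
    // counts == Map("a" -> 2, "b" -> 2, "c" -> 1)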

SeH

Jan 27, 2015, 10:08:23 AM
to general-in...@googlegroups.com
(Yes, Spark seems a bit over-engineered for this. One clue is the several-second startup time needed just to run the Spark example tests, which involve only an in-memory database.)

Here's an idea for debugging NARS, which should also apply in some way to Genifer:

construct a graph of the resulting "derivation patterns" (the "anonymized" form of logical terms) as a tool to verify (hyper-)symmetries of, and discover anomalies in, the logical processes.

https://github.com/opennars/opennars/blob/volatile1/nars_java/src/main/java/nars/logic/meta/Derivations.java

(&&,<A --> (|,[B],C)>,<A --> [B]>,<A --> C>)  (&&,<A --> (|,[B],C)>,<A --> [B]>,<A --> C>)  C:
    null
    (&&,<D --> (|,[B],C)>,<D --> [B]>)

(&&,<A --> (|,[B],C)>,<A --> [B]>,<A --> C>)  (&&,<A --> (|,[B],C)>,<A --> [B]>,<A --> C>)  [B]:
    null
    (&&,<D --> (|,[B],C)>,<D --> C>)
    (&&,<D --> (|,[B],C)>,<D --> [B]>)
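
The terms above are already in this "anonymized" form. As a toy sketch of the idea (not the actual Derivations.java logic), anonymization can be thought of as renaming each distinct atom to A, B, C, ... in order of first appearance, so that structurally identical derivations compare equal:

    // Toy Scala sketch: rename each lower-case atom to A, B, C, ...
    // in order of first appearance.
    def anonymize(term: String): String = {
      val names = scala.collection.mutable.LinkedHashMap[String, String]()
      "[a-z]+".r.replaceAllIn(term, m =>
        names.getOrElseUpdate(m.matched, ('A' + names.size).toChar.toString))
    }

    // anonymize("<bird --> animal>")            == "<A --> B>"
    // anonymize("<bird --> (|,animal,flyer)>")  == "<A --> (|,B,C)>"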

Graphing the term patterns in a vector space would show the presence or lack of symmetries (for debugging), as well as "holes" where additional reasoning rules might generate missing results. Even a 2D overview of the vector space shows what I'm describing. Code for mapping terms to N-dimensional vector spaces:

https://github.com/opennars/opennars/blob/volatile1/nars_lab/src/main/java/nars/rl/TermVectors.java
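
As a rough Scala sketch of the idea (the feature set here is an assumption, not necessarily what TermVectors.java computes): embed each pattern as a short vector of syntactic feature counts, so that similar derivation patterns land near each other when plotted. Projecting these vectors to 2D should already make missing symmetric counterparts visually obvious.

    // Rough sketch: map a term pattern to a small vector of syntactic
    // feature counts.
    def termVector(term: String): Vector[Double] = Vector(
      term.sliding(3).count(_ == "-->").toDouble,  // inheritance copulas
      term.sliding(3).count(_ == "<->").toDouble,  // similarity copulas
      term.sliding(3).count(_ == "==>").toDouble,  // implication copulas
      term.count(_ == '(').toDouble)               // compound terms

    // termVector("<A --> B>")                 == Vector(1.0, 0.0, 0.0, 0.0)
    // termVector("(&&,<A --> B>,<A <-> C>)")  == Vector(1.0, 1.0, 0.0, 1.0)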



SeH

Jan 27, 2015, 10:55:41 AM
to general-in...@googlegroups.com
<A --> B>  <A --> B>  A:
    null

<A --> B>  <A --> B>  B:
    null

A  <A --> B>  <B --> A>:
    <A --> B>
    <A <-> B>
    <B <-> A>

A  <B --> A>  <A --> B>:
    <A <-> B>
    <B --> A>
    <B <-> A>

A  <A --> B>  <C --> A>:
    <C --> B>  <B --> C>

A  <B --> A>  <A --> B>:
    <A <-> B>

A  <B --> A>  <A --> C>:
    <B --> C>  <C --> B>

A  <B --> A>  <C --> A>:
    <B --> C>  <C --> B>  <B <-> C>  <(&,B,C) --> A>  <(|,B,C) --> A>  <<B --> D> ==> <C --> D>>  <<C --> D> ==> <B --> D>>  <<B --> F> <=> <C --> F>>  (&&,<B --> G>,<C --> G>)
    <B --> C>  <C --> B>  <C <-> B>  <(&,C,B) --> A>  <(|,C,B) --> A>  <<B --> D> ==> <C --> D>>  <<C --> D> ==> <B --> D>>  <<C --> F> <=> <B --> F>>  (&&,<C --> G>,<B --> G>)