example for CRF Parameter Learning


Venkatesh U

Sep 20, 2012, 8:04:40 AM
to lib...@googlegroups.com
Hi,
  I am a research intern at TU Dortmund. I came across libDAI and found it to be interesting. I would like to know whether it supports training of arbitrary graph structures.

My problem is like this:
1. I have a graph structure that has loops and also involves parameter tying; it is a fully connected (pairwise) graph.
2. I would like to train a CRF for this structure using stochastic gradient descent and loopy belief propagation.

I am in the process of implementing this myself, but it would save me a lot of time and effort if libDAI supports this. If libDAI supports this scenario, any pointers to some examples would be very useful. Thanks a lot in advance.

Thanks and Regards,
Venkatesh Umaashankar

Venkatesh U

Sep 20, 2012, 10:27:49 AM
to lib...@googlegroups.com
Going through the code, I think parameter estimation for undirected models is not yet implemented in libDAI. I am currently working in this area and would like to implement it. Could I get some support in understanding the existing parameter estimation code? Are there already some plans regarding the architecture?

Joris Mooij

Sep 20, 2012, 12:04:11 PM
to Venkatesh U, lib...@googlegroups.com
Dear Venkatesh,

as you have correctly concluded from the code, libDAI currently does not
support parameter learning for undirected models. It would be a nice addition
to the feature set, though. The existing parameter estimation code for directed
graphical models was written by Charles Vaske and uses the EM algorithm to
handle missing data. What kind of support would you like to have? You can ask
your questions about the code here.

There are no plans regarding architecture/design. The EM code is somewhat orthogonal
to the rest of libDAI. I am not sure whether the best way to implement parameter
estimation for undirected models is by extending the EM code, or by writing some
other independent framework on top of the inference part of libDAI.

Obviously, it would be preferable to have one framework that can handle both
directed and undirected graphs, but I am not sure as to how this can be
designed in an optimal way.

Looking at the EMAlg code, I believe that a large part of it is specific
to EM and to dealing with missing values. Therefore, in your case I would
start by writing a stochastic gradient descent class that is written in a
general way, so that it can use any inference algorithm in libDAI. You have to
invent some mechanism for reading data (or use the Evidence class that is part
of the EMAlg code) and for specifying which parameters to learn and how they
are coupled. You could borrow some of the ideas from the EMAlg code.
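
To make that suggestion a bit more concrete, here is a purely hypothetical design sketch (SGDLearner and its methods are made up and do not exist in libDAI; the only real pieces are FactorGraph, Evidence, Real and the alias strings accepted by newInfAlgFromString):

// Hypothetical design sketch only -- SGDLearner and its members are invented here.
// The idea: a learner that owns the factor graph and the data, and is parameterized
// by the inference algorithm via a libDAI alias string,
// e.g. "BP[updates=SEQRND,tol=1e-9,maxiter=100,logdomain=0]".
#include <dai/alldai.h>
#include <string>

class SGDLearner {
  public:
    SGDLearner( const dai::FactorGraph &fg, const dai::Evidence &data,
                const std::string &infAlgSpec, dai::Real learningRate )
        : _fg(fg), _data(data), _infSpec(infAlgSpec), _eta(learningRate) {}

    // One pass over the data: for each sample, construct an inference algorithm with
    // dai::newInfAlgFromString( _infSpec, _fg ), run it, compute the gradient, and
    // take a step of size _eta on the factor entries.
    void iterate() { /* to be implemented */ }

    const dai::FactorGraph &learnedFactorGraph() const { return _fg; }

  private:
    dai::FactorGraph _fg;   // current parameters live in the factor tables
    dai::Evidence    _data; // training samples (reuses the Evidence class from the EM code)
    std::string      _infSpec;
    dai::Real        _eta;
};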

There is online documentation at http://cs.ru.nl/~jorism/libDAI/doc/
Especially relevant for you are:
http://cs.ru.nl/~jorism/libDAI/doc/fileformats.html
http://cs.ru.nl/~jorism/libDAI/doc/terminology.html
http://cs.ru.nl/~jorism/libDAI/doc/classdai_1_1EMAlg.html

Hope this helps. If you have more questions, this would be the right place to ask.

Best, Joris

Charles Vaske

Sep 20, 2012, 12:46:34 PM
to Joris Mooij, Venkatesh U, lib...@googlegroups.com
Hi Venkatesh,

I'd be happy to help out in any way that I can. I'm only partially familiar with methods for estimating undirected parameters, but I think that the current API is awfully close to working for this. If you have a particular estimation method in mind, I could help see how it does or does not fit into the framework.

Currently EMAlg uses a Prob to represent the sum of expected states across all the data samples. Then there are estimators which take those expectations and estimate a new factor for the next iteration of EM. I think there's room for an intermediate representation between the two, of "parameters", particularly for the estimators that use an underlying model with fewer than the maximum number of parameters. I have a few changes here on a branch, but I have not yet committed the time to merging it back into mainline in a fully backwards-compatible manner.

Kind regards,
-Charlie

Venkatesh U

Sep 20, 2012, 2:21:14 PM
to Charles Vaske, Joris Mooij, lib...@googlegroups.com
Hi,
  Thanks for your immediate reply. I think the existing factor graph framework and the belief propagation framework form a major part of parameter estimation for undirected models. I am especially interested in Conditional Random Fields.

A simple summary of parameter learning for a CRF using SGD:

For every instance of sample data, compute the empirical feature count from the instance and the estimated feature count through inference. (Empirical - estimated) is the gradient of the log-likelihood; we just use this gradient with a small learning rate to update the parameters.
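
In symbols (a standard way of writing this, with feature functions f_k, weights theta_k and learning rate eta; the notation is mine, not from the slides):

\[
\frac{\partial \ell(\theta)}{\partial \theta_k} \;=\; \tilde{E}[f_k] - E_{p_\theta}[f_k],
\qquad
\theta_k \;\leftarrow\; \theta_k + \eta \bigl( \tilde{E}[f_k] - E_{p_\theta}[f_k] \bigr),
\]

where \tilde{E}[f_k] is the empirical feature count computed from the training instance and E_{p_\theta}[f_k] is the expected feature count under the current model, obtained by inference.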

Here is a detailed slide on what I am interested in doing:

Existing factor graph and BP implementations are sufficient to compute the sufficient statistics and the expected feature values, I believe. Then we update the parameters for every sample with a learning rate.

I think the only missing link is the optimization part.

Charles, could you please give a high-level overview of how EM is done for directed models? I tried going through the code and I believe it is not optimization-based; correct me if I am wrong. For Bayesian networks, I believe we can learn the parameters just by counting from the empirical distribution of the sample data, since there is a closed-form solution.

Any literature reference, or some algorithmic explanation of the current implementation for directed models, would help me analyze how much of it can be reused for undirected models.

Thanks,
Venkatesh

Charles Vaske

Sep 20, 2012, 3:42:27 PM
to Venkatesh U, Joris Mooij, lib...@googlegroups.com
On Sep 20, 2012, at 11:21 AM, Venkatesh U <venka...@gmail.com> wrote:
> Charles, could you please give a high-level overview of how EM is done for directed models? I tried going through the code and I believe it is not optimization-based; correct me if I am wrong. For Bayesian networks, I believe we can learn the parameters just by counting from the empirical distribution of the sample data, since there is a closed-form solution.
>
> Any literature reference, or some algorithmic explanation of the current implementation for directed models, would help me analyze how much of it can be reused for undirected

Yes, you are correct: the current EM maximization steps are not optimization-based. Starting from some particular set of factors, the EM algorithm is:

1) compute the expectations of all the unobserved variables;
2) for an individual factor, sum together the expectations over samples as fractional counts, and create a new factor using these fractional counts as the input to a complete-data maximum likelihood estimator.

For a conditional probability table P(Y|X) this estimator is very simple: for each setting x of X, set P(Y|x) proportional to the observed frequencies of Y. If you're familiar with Koller's notation, then pages 868-893 of Koller & Friedman's PGM book discuss this in great detail, and page 873 has the algorithm for CPTs in Bayesian networks. There's also some fantastic discussion comparing EM to gradient ascent.
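
In a formula (my paraphrase of the estimator described above): with N(x, y) denoting the fractional counts summed over the samples, the M-step for a CPT is simply

\[
\hat{P}(y \mid x) \;=\; \frac{N(x, y)}{\sum_{y'} N(x, y')} .
\]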

One thing about EM is that these estimations are all local and stateless; the results of previous rounds do not matter, and only the expectations from a factor are used to estimate that factor. 

I think that existing L-BFGS libraries like libLBFGS already implement this two-step loop, and one only needs to provide the gradient at a specific set of parameters. So I see two potential strategies:

1) subgraph parameter optimization: make another ParameterEstimation subclass that calls libLBFGS to find the next set of parameter values for a subset of the graph;
2) global parameter optimization: outside of EMAlg, make a C wrapper function that creates a FactorGraph based on the parameters, runs the InfAlg, and then converts the expectations to the parameter gradient and returns it to libLBFGS.

I'd have to think about the theoretical basis of doing (1); the main advantage is that it potentially allows mixed factor estimation strategies in the same graph. However, this may significantly hurt how L-BFGS works: how much state does L-BFGS need to maintain from previous iterations, and does it need to know all the parameters at once? By this I mean that even though L-BFGS doesn't directly use the Hessian matrix of second derivatives across the entire graph, it may still need the first derivatives from all the factors at once. Both of these issues could potentially be worked around in a ParameterEstimation subclass, but it may be a lot of work.

I've looked at the code of a single CRF implementation, a chain CRF, and it used approach (2) with libLBFGS. If I were only interested in doing whole-graph estimation, my inclination would be (2), as it is the only approach that I'm confident is correct. Mixing estimation strategies seems to be a fairly esoteric advantage that may result in a lot more work. The engineering challenge in the wrapper function is how to map expectations to entries in the gradient array; perhaps the EMAlg framework would be helpful here, but I'm not entirely sure.

The bulk of the EMAlg code is just managing how factors and variables map together, in particular when parameters are shared between multiple factors in the graph. Perhaps this part of the code may be useful even for (2).
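
For concreteness, here is a rough, untested sketch of what the wrapper in (2) might look like with liblbfgs. The parameterization is entirely an assumption of mine (one free parameter per factor entry, no sharing, potentials psi_I(s) = exp(theta_{I,s}), a single fully observed training instance, no regularization); only the libDAI calls (FactorGraph, BP, clamp, beliefF, logZ, calcLinearState, setFactor) and the liblbfgs entry points are taken from the real libraries. Treat it as a starting point, not a working implementation.

// Sketch of strategy (2): whole-graph conditional log-likelihood + gradient for liblbfgs.
// liblbfgs minimizes, so we return the NEGATIVE log-likelihood and its gradient.
#include <dai/alldai.h>
#include <lbfgs.h>
#include <cmath>
#include <map>
#include <vector>

struct CrfProblem {
    dai::FactorGraph fg;                 // structure only; potentials get overwritten from theta
    std::map<dai::Var, size_t> data;     // ONE fully observed training instance (summing over a
                                         // whole data set is omitted here for brevity)
    std::vector<dai::Var> observedVars;  // the always-observed variables (the "x" part)
};

// psi_I(s) = exp(theta_{I,s}): one parameter per entry of every factor, no sharing.
static void setPotentialsFromTheta( dai::FactorGraph &fg, const lbfgsfloatval_t *theta ) {
    size_t k = 0;
    for( size_t I = 0; I < fg.nrFactors(); I++ ) {
        dai::Factor f( fg.factor(I).vars() );
        for( size_t s = 0; s < f.nrStates(); s++ )
            f.set( s, std::exp( theta[k++] ) );
        fg.setFactor( I, f );
    }
}

static lbfgsfloatval_t evaluate( void *instance, const lbfgsfloatval_t *theta,
                                 lbfgsfloatval_t *grad, const int /*n*/,
                                 const lbfgsfloatval_t /*step*/ ) {
    CrfProblem *p = static_cast<CrfProblem *>( instance );
    dai::PropertySet opts = dai::PropertySet()( "updates", std::string("SEQRND") )
        ( "logdomain", false )( "tol", dai::Real(1e-9) )( "maxiter", size_t(10000) );

    // Potentials from the current theta, observed variables clamped to their values.
    dai::FactorGraph condFG( p->fg );
    setPotentialsFromTheta( condFG, theta );
    for( size_t i = 0; i < p->observedVars.size(); i++ ) {
        const dai::Var &v = p->observedVars[i];
        condFG.clamp( condFG.findVar( v ), p->data[v] );
    }
    dai::BP bp( condFG, opts );
    bp.init();
    bp.run();

    // -log P(v|x) = logZ(x) - sum_I theta_{I, s_I(data)}  (logZ is approximate under loopy BP)
    lbfgsfloatval_t negLL = bp.logZ();
    size_t k = 0;
    for( size_t I = 0; I < p->fg.nrFactors(); I++ ) {
        dai::VarSet vs = p->fg.factor(I).vars();
        size_t sObs = dai::calcLinearState( vs, p->data );  // state of factor I in the data
        negLL -= theta[k + sObs];
        dai::Factor belief = bp.beliefF( I );               // marginal of factor I given x
        for( size_t s = 0; s < belief.nrStates(); s++ )     // d(-logL)/dtheta = model - empirical
            grad[k + s] = belief[s] - ( s == sObs ? 1.0 : 0.0 );
        k += belief.nrStates();
    }
    return negLL;
}

// Usage sketch: pack all factor entries into one theta vector and minimize.
int learn( CrfProblem &p ) {
    int n = 0;
    for( size_t I = 0; I < p.fg.nrFactors(); I++ )
        n += (int)p.fg.factor(I).nrStates();
    lbfgsfloatval_t fx = 0;
    lbfgsfloatval_t *theta = lbfgs_malloc( n );
    for( int k = 0; k < n; k++ )
        theta[k] = 0.0;                                     // start from the uniform model
    lbfgs_parameter_t param;
    lbfgs_parameter_init( &param );
    int ret = lbfgs( n, theta, &fx, evaluate, NULL, &p, &param );
    setPotentialsFromTheta( p.fg, theta );                  // write the learned potentials back
    lbfgs_free( theta );
    return ret;
}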

Best,
-Charlie

Venkatesh U

Sep 21, 2012, 9:32:33 AM
to lib...@googlegroups.com
Dear Charlie,
  Thanks for the pointers. I went through the relevant pages in the PGM book and I feel it is best to handle parameter learning for CRFs in a separate class.

As a first step, I would like to skip parameter sharing between the nodes.

Dear Joris,
   Does libDAI currently support probabilistic querying? Assume I have a factor graph with the following factors:

V1,V2,X
V1,V3,X
V2,V3,X
V1,X
V2,X
V3,X

The variables V1, V2, V3 are hidden nodes, and the variable X is always observed. Is it possible to make queries like

P( V1 = a, V2 = b, V3 = c | X = x )

I think this query can be answered by setting the potential to zero where X != x in the above factor graph, reducing it to a graph containing only the variables V1, V2, V3, and then running BP. Is there some code which already does this?

This is required to compute the expected feature value, which is needed for computing the gradient of the log-likelihood. I am currently analyzing the extra code that would be required to implement parameter learning for a general CRF without parameter sharing.

Thanks,
Venki

Joris Mooij

Sep 21, 2012, 4:02:10 PM
to Venkatesh U, lib...@googlegroups.com
Dear Venkatesh,

On Fri, Sep 21, 2012 at 03:32:33PM +0200, Venkatesh U wrote:
> Dear Joris,
>    Does libDAI currently support probabilistic querying? Assume I have a factor
> graph with the following factors:
>
> V1,V2,X
> V1,V3,X
> V2,V3,X
> V1,X
> V2,X
> V3,X
>
> The variables V1, V2, V3 are hidden nodes, and the variable X is always observed.
> Is it possible to make queries like
>
> P( V1 = a, V2 = b, V3 = c | X = x )
>
> I think this query can be answered by setting the potential to zero where X
> != x in the above factor graph, reducing it to a graph containing only the
> variables V1, V2, V3, and then running BP. Is there some code which already does
> this?

Yes, there is, have a look at the FactorGraph::clamp* functions.
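
For example (a small sketch, not taken from the libDAI documentation; fg, X, x, V1, V2, V3 and opts are placeholders for your graph, variables and BP options):

// Condition on X = x by clamping a copy of the graph, then run BP on it.
dai::FactorGraph clamped( fg );                 // keep the original graph intact
clamped.clamp( clamped.findVar( X ), x );       // multiplies in a Kronecker delta on X
dai::BP bp( clamped, opts );
bp.init();
bp.run();
// Single-variable conditionals P(Vi | X = x):
dai::Factor bV1 = bp.beliefV( clamped.findVar( V1 ) );
// Joint conditional P(V1, V2, V3 | X = x); calcMarginal computes it by further clamping:
dai::Factor joint = dai::calcMarginal( bp, dai::VarSet( V1, V2 ) | dai::VarSet( V3 ), true );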

Venkatesh U

Sep 28, 2012, 7:52:36 AM
to lib...@googlegroups.com, Venkatesh U, joris...@libdai.org
Dear Joris,

I have started coding the CRF learning. I am an experienced Java developer, but a newbie in C++. So please bear with me if there are any very trivial questions from me.

I plan to implement CRF learning based on stochastic gradient ascent and gradient ascent, two flavors. Initially, I plan to avoid the problem of parameter sharing and focus on providing support for arbitrary graphs.

I would need some help regarding the code. Below is my query 


  Is there a function which returns the index of a value in a factor, given the list of variables and their states?

I.e., a factor contains the variables V1 (n1 states) and V2 (n2 states). I have an observation ( V1 = 0 ), ( V2 = 1 ). I need the index of this observation in the factor. Is there currently a function which provides this? I need such a function and was about to write one on my own, but before that I wanted to check with you whether it already exists.

Thanks and Regards,
Venkatesh



Joris Mooij

Sep 28, 2012, 8:52:55 AM
to Venkatesh U, lib...@googlegroups.com
Dear Venkatesh,

On Fri, Sep 28, 2012 at 04:52:36AM -0700, Venkatesh U wrote:
> Dear Joris,
>
> I have started coding the CRF learning. I am an experienced Java developer,
> but a newbie in C++. So please bear with me if there are any very trivial
> questions from me.

No problem.

> I plan to implement CRF learning based on stochastic gradient ascent and
> gradient ascent, two flavors. Initially, I plan to avoid the problem of parameter
> sharing and focus on providing support for arbitrary graphs.

Sounds like a good plan.

> I would need some help regarding the code. Below is my query
>
> Is there a function which returns the index of a value in a factor, given
> the list of variables and their states?
>
> I.e., a factor contains the variables V1 (n1 states) and V2 (n2 states). I have an
> observation ( V1 = 0 ), ( V2 = 1 ). I need the index of this observation in
> the factor. Is there currently a function which provides this? I need such a
> function and was about to write one on my own, but before that I wanted to check
> with you whether it already exists.

Yes, the function calcLinearState in include/dai/varset.h
(and see also its "inverse", calcState).
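
For example (a quick sketch; the variable labels and numbers of states are made up):

#include <dai/varset.h>
#include <map>

void indexExample() {
    dai::Var V1( 1, 2 ), V2( 2, 3 );        // labels 1 and 2, with 2 and 3 states (example values)
    dai::VarSet vs( V1, V2 );               // the factor's variable set
    std::map<dai::Var, size_t> obs;         // the observation ( V1 = 0 ), ( V2 = 1 )
    obs[V1] = 0;
    obs[V2] = 1;
    size_t idx = dai::calcLinearState( vs, obs );                  // index into the factor's table
    std::map<dai::Var, size_t> back = dai::calcState( vs, idx );   // and back again
    (void)back;
}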

Best, Joris


Venkatesh U

Sep 30, 2012, 10:59:43 AM
to lib...@googlegroups.com, Venkatesh U, joris...@libdai.org
Dear Joris,
  How do I compute the probability (likelihood) of an observation? I currently plan to clamp the factor graph based on the observed values and run belief propagation on the fg. After running BP, how do I query the likelihood of the observation?

Thanks,
Venkatesh

Joris Mooij

Oct 1, 2012, 3:57:09 AM
to Venkatesh U, lib...@googlegroups.com
Dear Venkatesh,

You can query the log-partition sum by the InfAlg::logZ() method.

Also, you can take a look at the implementation of the function
calcMarginal in daialg.h/cpp.
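
Concretely (my notation, matching the clamping approach from your earlier message): if Z(x) denotes the partition sum of the graph with only the observed variables clamped to x, and Z(x, v) the partition sum with all variables clamped, then

\[
P(v \mid x) \;=\; \frac{Z(x, v)}{Z(x)},
\qquad
\log P(v \mid x) \;=\; \log Z(x, v) - \log Z(x),
\]

where each log-partition sum is what InfAlg::logZ() returns after running inference on the corresponding clamped graph (exact on trees, approximate under loopy BP).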

Best,
Joris


Venkatesh U

Oct 5, 2012, 1:36:17 PM
to lib...@googlegroups.com, Venkatesh U, joris...@libdai.org
Dear Joris,

  This is how I am computing the joint probability of an observation in the training data. Could you please check and confirm whether this is fine?

FactorGraph zClamped = FactorGraph(*crfFG);
// clamp only the variables that are always observed
for (std::vector<Var>::iterator obsIt = observedVars->begin();
     obsIt != observedVars->end(); obsIt++) {
    zClamped.clamp(zClamped.findVar(*obsIt), e->find(*obsIt)->second);
}
BP bp(zClamped, opts("updates", string("SEQRND"))("logdomain", false));
bp.init();
bp.run();
Real logPartition = bp.logZ();

FactorGraph obsClamped = FactorGraph(*crfFG);
// clamp all the variables in the observation (includes hidden nodes also)
for (Evidence::Observation::const_iterator obsIt = e->begin();
     obsIt != e->end(); ++obsIt) {
    obsClamped.clamp(obsClamped.findFactor((*obsIt).first),
                     (*obsIt).second);
}
bp = BP(obsClamped,
        opts("updates", string("SEQRND"))("logdomain", false));
bp.init();
bp.run();
Real obsP = bp.logZ();
Real pObs = std::exp(obsP - logPartition);


2) I am almost done with the code. I would need some sample data to test it. Any pointers to some simple datasets for CRFs, such as sprinkler, or any utility for generating data would be useful.

Thanks,
Venkatesh

Joris Mooij

Oct 8, 2012, 4:16:52 AM
to Venkatesh U, lib...@googlegroups.com
Dear Venkatesh,

On Fri, Oct 05, 2012 at 10:36:17AM -0700, Venkatesh U wrote:
> This is how, I am computing the joint probability of an observation in
> training data. Could you please check and confirm if this is fine

This looks fine, except that "findFactor" call should be a "findVar" call (see
below).

> FactorGraph zClamped = FactorGraph(*crfFG);
> // clamp only the variables that are always observed
> for (std::vector<Var>::iterator obsIt = observedVars->begin();
> obsIt != observedVars->end(); obsIt++) {
> zClamped.clamp(zClamped.findVar(*obsIt), e->find(*obsIt)->second);
> }
> BP bp((zClamped), opts("updates", string("SEQRND"))("logdomain", false));
> bp.init();
> bp.run();
> Real logPartition = bp.logZ();
>
> FactorGraph obsClamped = FactorGraph(*crfFG);
> //clamp all the variables in the observation ( includes hidden nodes
> also)
> for (Evidence::Observation::const_iterator obsIt = e->begin();
> obsIt != e->end(); ++obsIt) {
> obsClamped.clamp(obsClamped.findFactor((*obsIt).first),

Why do you use findFactor? FactorGraph::clamp expects the index of a variable,
not a factor index. This should be findVar, as above...

> (*obsIt).second);
> }
> bp = BP((obsClamped),
> opts("updates", string("SEQRND"))("logdomain", false));
> bp.init();
> bp.run();
> Real obsP = bp.logZ();
> Real pObs = std::exp(obsP - logPartition);

> 2) I am almost done with the code. I would need some sample data to test it.
> Any pointers to some simple datasets such as sprinkler for CRF or any utility
> for
> generating data would be useful.

Have you seen the example programs in the example/ directory? There are a few
example programs involving the sprinkler network, and one of them generates
sprinkler data and writes them to a file (sprinkler.tab).

Best, Joris

Venkatesh U

Oct 15, 2012, 1:08:16 PM
to lib...@googlegroups.com, Venkatesh U, joris...@libdai.org
Dear Joris,
  After running BP on a factor graph, I would like to get the normalized marginals of each factor in the factor graph. Could I use bp.belief(fg.factor(I).vars()) for this purpose? I went through the code, and I hope this returns the normalized marginals for each state of the factor. Could you please confirm this?

Thanks,
Venkatesh

Venkatesh Umaashankar

Oct 30, 2012, 9:39:20 AM
to Joris Mooij, lib...@googlegroups.com, Nico Piatkowski
Hi all,
I am glad to share the initial draft version of CRF parameter
learning using libDAI. It is rather simple and also showcases
how libDAI could easily be extended to
undirected graphical models. The link to the repository is below.

https://bitbucket.org/venkatesh20/libdai_crflearn

This supports parameter learning for arbitrarily structured CRFs. I have
provided sample data and a wiki, which will help you get started quickly.

Joris,
The code is simple and I have tried my best to keep it clean. Since I am
new to C++, I do not know how to integrate this example into the libDAI
make system. I would appreciate any help.
Also, exceptions are mostly not handled; any feedback on the
code is welcome. I am interested in integrating this with libDAI.

Next steps will be to support parameter sharing and structure learning.

Thanks,
Venkatesh Umaashankar

On Tuesday 16 October 2012 08:28 AM, Joris Mooij wrote:
> Hi Venkatesh,
> yes, that's one way to do it, but a faster way is to use bp.beliefF(I).
> Best, Joris

Angel Lin

Mar 2, 2013, 12:47:30 AM
to lib...@googlegroups.com, Joris Mooij, Nico Piatkowski
Dear Venkatesh,

Thanks for sharing the CRF parameter learning code, but I have a question about the training input and output. You use iris_pairwise2.fg and iris_train.csv as the input, and then generate learned_iris_pairwise2.fg as the training output.

As you said in https://bitbucket.org/venkatesh20/libdai_crflearn, the input fg file is used only to represent the graphical structure to be learnt, and the values of the parameters can be just zero. Could you please explain more about the fg file and the parameters?

There are 18 factors in iris_pairwise2.fg as follows:
# no of factors
18

4
4 5 0 1
2 2 20 20
0

4
4 5 0 2
2 2 20 20
0

4
4 5 0 3
2 2 20 20
0

4
4 5 1 2
2 2 20 20
0

4
4 5 1 3
2 2 20 20
0

4
4 5 2 3
2 2 20 20
0

4
6 5 0 1
2 2 20 20
0

4
6 5 0 2
2 2 20 20
0

4
6 5 0 3
2 2 20 20
0

4
6 5 1 2
2 2 20 20
0

4
6 5 1 3
2 2 20 20
0

4
6 5 2 3
2 2 20 20
0

4
6 4 0 1
2 2 20 20
0

4
6 4 0 2
2 2 20 20
0

4
6 4 0 3
2 2 20 20
0

4
6 4 1 2
2 2 20 20
0

4
6 4 1 3
2 2 20 20
0

4
6 4 2 3
2 2 20 20
0

attribute variable:
  • 0 - sepal length - discretized to 20 bins
  • 1 - sepal width - discretized to 20 bins
  • 2 - petal length - discretized to 20 bins
  • 3 - petal width - discretized to 20 bins

label variable:
  • 4 - Iris Setosa
  • 5 - Iris Versicolour
  • 6 - Iris Virginica

The second row of each factor block lists the 4 variables that the factor includes. Why did you choose two label variables and two attribute variables as the 4 variables of a factor? Could you describe the potential function that you have defined on your graphical structure?

Thanks a lot.

Best,
Angel Lin


On Tuesday, October 30, 2012 at 9:39:24 AM UTC-4, Venkatesh U wrote:

Venkatesh U

Mar 2, 2013, 3:27:10 AM
to Angel Lin, lib...@googlegroups.com, Joris Mooij, Nico Piatkowski
Hi Angel,

 Basically, you could choose any structure. I wanted to capture all the pairwise interactions between labels and all the pairwise interactions between observed variables at a time, and hence you see 4 variables in each factor: 2 from the labels and 2 from the observed variables. You can choose the structure as you wish. In this case it is simply pairwise, considering only 2 observed variables at a time; this makes the BP runs faster.
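
For example (my arithmetic, just reading off the cardinalities "2 2 20 20" in the .fg file above): each factor has 2 x 2 x 20 x 20 = 1600 entries, and BP message updates scale with the size of these factor tables. A single factor over all three (binary) label variables and all four attributes with 20 bins each would instead have 2^3 x 20^4 = 1,280,000 entries, which is why keeping the factors pairwise in the labels and pairwise in the attributes keeps inference cheap.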

Please let me know if you have any further questions. Also, it would be great if you could share whether you have any plans to extend this; I would be willing to collaborate.

Thanks,
Venkatesh



Angel Lin

Mar 3, 2013, 12:40:39 AM
to lib...@googlegroups.com, Angel Lin, Joris Mooij, Nico Piatkowski, venkatesh....@gmail.com

Hi Venkatesh,

In order to make clear what I want to do, I will also give you a factor graph example as follows.

As you can see, the gi and fij are two different kinds of potential functions. But in your CRF code, there is just one kind of potential function, defined on 2 label variables and 2 attribute variables. Is your potential function represented by "exp(theta(theta_index))" or something else? If so, do you know how I can add some other, different potential functions to your code?

Also, I used the input files, including iris_pairwise.fg and the training file iris_train.csv without the columns for 5 and 6, to train the CRF model. But it produces the following error: "error: Invalid Evidence file: Variable  not known [File src/evidence.cpp, line 49, function: void dai::Evidence::addEvidenceTabFile(std::istream&, std::map<std::basic_string<char>, dai::Var>&)]". Do you know what the problem with the training file is, and how to prepare correct training data according to an input fg file?

Thank you very much for your kind help.

Best,
Angel Lin


On Saturday, March 2, 2013 at 3:27:10 AM UTC-5, Venkatesh U wrote:

param...@gmail.com

Sep 19, 2014, 6:37:51 AM
to lib...@googlegroups.com, joris...@libdai.org

Hi everyone,
I would like to know how the parameters of a CRF are computed with L-BFGS:
what the inputs are and how they are given, and
how the derivative of the log-likelihood function is derived.

Please help me out in this regard.

thanks

rup...@fbk.eu

Nov 25, 2014, 7:34:15 AM
to lib...@googlegroups.com, joris...@libdai.org, nico.pi...@cs.tu-dortmund.de
The example dataset (iris) runs perfectly fine with my compiled code, except that the prediction phase at the end returns 0s for variables 4/5/6 (the label variables) instead of predicting them. Is it possible that there is a bug in the code?
Looking forward to hearing from you,
ewelina

amirho...@gmail.com

Mar 31, 2016, 4:38:26 PM
to libDAI, venka...@gmail.com, joris...@libdai.org
I wonder whether the new release of libDAI supports parameter learning for undirected models.

Joris Mooij

Apr 1, 2016, 10:20:29 AM
to amirho...@gmail.com, libDAI, venka...@gmail.com
Dear Amir,

unfortunately, the answer is no.

libDAI development has stagnated due to other research interests and I am only doing some maintenance work (bug fixes).

Best, Joris
