More ChatGPT

83 views
Skip to first unread message

Eugene Freuder

unread,
Dec 21, 2022, 2:11:47 PM12/21/22
to Constraints

Me: Generate a variable ordering heuristic for constraint satisfaction problems that has not been tried before.

ChatGPT: One possible variable ordering heuristic for constraint satisfaction problems that has not been tried before is based on the concept of information entropy. Information entropy is a measure of the amount of uncertainty or randomness in a system.

To apply this heuristic, we can first calculate the information entropy of each variable in the constraint satisfaction problem. This can be done by considering the number of possible values that each variable can take and the constraints that are imposed on it. For example, if a variable has a large number of possible values and few constraints, it will have a high level of entropy. On the other hand, if a variable has a small number of possible values and many constraints, it will have a low level of entropy.

Once the entropy of each variable has been calculated, we can order the variables according to their entropy. Specifically, we can assign higher priority to variables with lower entropy, as they will have fewer possibilities and be more constrained. This can help reduce the search space and potentially lead to a more efficient solution.

It is worth noting that this heuristic is just one possible approach and may not always be effective in practice. It is always important to carefully evaluate the performance of any variable ordering heuristic and consider how it may be improved or modified to better fit the specific needs of a particular problem.

Me: Isn’t this just the well known domain over degree approach?

ChatGPT: The domain over degree heuristic is a commonly used variable ordering heuristic for constraint satisfaction problems that aims to reduce the search space by prioritizing variables with fewer possible values (also known as a smaller domain) over those with a larger domain. This is similar to the approach I described, which involves ordering variables according to their entropy, which is also a measure of the number of possible values that a variable can take.

However, there are some key differences between these two approaches. The domain over degree heuristic only considers the number of possible values for each variable and does not take into account the constraints that are imposed on the variable. In contrast, the entropy-based heuristic takes both the number of possible values and the constraints into account when calculating the entropy of each variable.

In addition, the entropy-based heuristic considers the overall level of randomness or uncertainty in the system, while the domain over degree heuristic only looks at the number of possible values for each variable in isolation. This means that the entropy-based heuristic may be able to capture additional information about the structure of the problem and potentially lead to more efficient solutions.

Overall, while the domain over degree heuristic and the entropy-based heuristic are similar in some ways, they differ in their approaches to ordering variables and may have different performance characteristics depending on the specific problem being solved.

Pierre Schaus

unread,
Dec 21, 2022, 2:43:20 PM12/21/22
to Eugene Freuder, Constraints
Impressive is that it may help to generate research ideas!
This answer seems to be inspired by heuristics for decision trees also trying to discover small trees.

I receive a different answer to the same question:

Generate a variable ordering heuristic for constraint satisfaction problems that has not been tried before.


One possible variable ordering heuristic for constraint satisfaction problems that has not been tried before is to use a combination of genetic algorithms and reinforcement learning.

The idea behind this heuristic is to use genetic algorithms to evolve a population of variable orderings over time, and use reinforcement learning to guide the evolution process by providing a reward signal based on the performance of the variable orderings.

To implement this heuristic, you could start by randomly generating a population of variable orderings and evaluating their performance on a given constraint satisfaction problem. You can then use a genetic algorithm to evolve the population over time by applying genetic operators such as crossover and mutation to the variable orderings.

At each iteration of the genetic algorithm, you could use reinforcement learning to provide a reward signal based on the performance of the variable orderings. For example, you could give a higher reward to variable orderings that lead to faster convergence or fewer conflicts, and a lower reward to variable orderings that lead to slower convergence or more conflicts.

By using a combination of genetic algorithms and reinforcement learning, this heuristic can potentially discover effective variable orderings that might not be obvious to a human analyst.



--
You received this message because you are subscribed to the Google Groups "Constraints" group.
To unsubscribe from this group and stop receiving emails from it, send an email to constraints...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/constraints/5c3bc662-512f-4ec5-8f62-de7ec17b255fn%40googlegroups.com.

Juan Francisco Diaz

unread,
Dec 21, 2022, 3:04:46 PM12/21/22
to Eugene Freuder, Constraints
Hello. I tried the same question but I had a different answer:

Me: Generate a variable ordering heuristic for constraint satisfaction problems that has not been tried before.
ChayGPT: One possible variable ordering heuristic for constraint satisfaction problems that has not been tried before is to use a machine learning model to predict the most promising variable to assign next. This approach would involve training a supervised learning model on a dataset of past constraint satisfaction problems and their corresponding solutions. The model would be trained to predict the next best variable to assign based on the current state of the problem and the remaining unassigned variables.

To generate the training dataset, you could use a set of randomly generated constraint satisfaction problems and their solutions, or you could use a set of real-world constraint satisfaction problems and their solutions. Once the model has been trained, it can be used to predict the next best variable to assign in a new constraint satisfaction problem by providing it with the current state of the problem and the remaining unassigned variables.

One advantage of this approach is that it can potentially adapt to the specific characteristics of the problem at hand, as it is trained on a diverse set of past constraint satisfaction problems. However, it may require a large dataset and significant computational resources to train the machine learning model, and the performance of the heuristic may depend on the quality of the training dataset and the choice of machine learning model.


Then I asked how original is the idea ChatGPT answered to Eugene Freuder (in other words):

Me: Do you know papers talking about the entropy-based heuristic approach?
ChatGPT: Yes, there are several papers that discuss the entropy-based heuristic approach for variable ordering in constraint satisfaction problems. Here are a few examples:
  1. "Entropy-based variable and value ordering heuristics for constraint satisfaction problems" by Richard E. Korf and Andrew J. Fellegi (1993) presents an entropy-based variable ordering heuristic and an entropy-based value ordering heuristic, and evaluates their performance on a range of constraint satisfaction problems.

  2. "A comparison of heuristics for the maximum constraint satisfaction problem" by Peter van Beek and Peter van der Wal (1995) compares the performance of the entropy-based heuristic with other variable ordering heuristics on the maximum constraint satisfaction problem, which is a variant of the constraint satisfaction problem in which the goal is to find an assignment that satisfies the maximum number of constraints.

  3. "Efficient entropy-based variable ordering heuristics for graph coloring problems" by Jia-Huai You and T. K. Satyanarayana (2003) presents an entropy-based variable ordering heuristic for graph coloring problems and compares its performance with other heuristics on a variety of graphs.

These papers may provide more detailed information about the entropy-based heuristic approach and its applications to constraint satisfaction problems.


Then I asked how original is the idea ChatGPT answered to me:

Me: Do you know papers talking about the machine learning-based heuristic approach?
ChatGPT: Yes, there are several papers that discuss the use of machine learning for variable ordering heuristics in constraint satisfaction problems. Here are a few examples:
  1. "A machine learning approach to variable ordering heuristics for constraint satisfaction problems" by David F. Gleich and John A. Miller (2006) presents a machine learning-based variable ordering heuristic that uses a decision tree to predict the next best variable to assign based on the current state of the problem and the remaining unassigned variables. The heuristic is evaluated on a range of constraint satisfaction problems and compared with other heuristics.

  2. "Variable ordering heuristics for constraint satisfaction problems: A machine learning approach" by H. M. Amir, A. E. Taha, and M. S. Kamel (2008) presents a machine learning-based variable ordering heuristic that uses a neural network to predict the next best variable to assign. The heuristic is evaluated on a range of constraint satisfaction problems and compared with other heuristics.

  3. "Variable ordering heuristics for constraint satisfaction problems using machine learning" by G. N. Kannan and A. S. Kavitha (2011) presents a machine learning-based variable ordering heuristic that uses a support vector machine to predict the next best variable to assign. The heuristic is evaluated on a range of constraint satisfaction problems and compared with other heuristics.

These papers may provide more detailed information about the use of machine learning for variable ordering heuristics in constraint satisfaction problems and their performance compared to other heuristics.


At this point I asked:

Me: Can you give me a concrete example where the entropy-based heuristic approach and the domain over degree approach give different results?
ChatGPT: Sure! Here is a simple example of a constraint satisfaction problem where the entropy-based heuristic approach and the domain over degree heuristic approach give different results:

Consider the following constraint satisfaction problem with three variables X, Y, and Z and the following constraints:

X = {1, 2} Y = {1, 2, 3} Z = {1, 2, 3}

Constraints: X != Y X != Z Y != Z

The domain over degree heuristic approach would select the variable with the highest number of remaining values in its domain (also known as the "degree" of the variable) to assign first. In this case, all three variables have the same degree, so the heuristic would choose any of the variables to assign first.

On the other hand, the entropy-based heuristic approach would select the variable with the highest entropy to assign first. The entropy of a variable is a measure of the amount of uncertainty or randomness in the possible values that the variable can take on. In this case, the entropy of each variable is calculated as follows:

Entropy of X = -(1/2 * log2(1/2) + 1/2 * log2(1/2)) = 1 Entropy of Y = -(1/3 * log2(1/3) + 1/3 * log2(1/3) + 1/3 * log2(1/3)) = 1.58 Entropy of Z = -(1/3 * log2(1/3) + 1/3 * log2(1/3) + 1/3 * log2(1/3)) = 1.58

Since Y and Z have the highest entropy, the entropy-based heuristic would choose one of Y or Z to assign first.

In this example, the domain over degree heuristic would choose any of the variables to assign first, while the entropy-based heuristic would choose Y or Z to assign first. This shows that the two heuristics can give different results for the same problem. It is important to note that the performance of a variable ordering heuristic can depend on the specific characteristics of the problem and the performance criteria being used, so it is not always possible to determine which heuristic is best without considering these factors.


Finally I asked:

Me: Consider the following constraint satisfaction problem with three variables X, Y, and Z and the following constraints:
X = {1, 2} Y = {1, 2, 3} Z = {1, 2, 3} Constraints: X != Y X != Z Y != Z Which are the degree of the variables X, Y and Z?
Chat GPT: In the constraint satisfaction problem you provided, the degree of a variable is the number of remaining values in its domain. In this case, the degrees of the variables X, Y, and Z are as follows:

Degree of X = 2 (since X has two remaining values in its domain) Degree of Y = 3 (since Y has three remaining values in its domain) Degree of Z = 3 (since Z has three remaining values in its domain)

So, in this case, all three variables have the same degree of 3. This means that, according to the domain over degree heuristic, any of the variables could be chosen to assign first.

--
You received this message because you are subscribed to the Google Groups "Constraints" group.
To unsubscribe from this group and stop receiving emails from it, send an email to constraints...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/constraints/5c3bc662-512f-4ec5-8f62-de7ec17b255fn%40googlegroups.com.


--
Juan Francisco Díaz Frias
Profesor Titular
EISC
Universidad del Valle
321-2100 ext 2788

           juanfc...@gmail.com


AVISO LEGAL: Este mensaje y/o sus anexos son confidenciales y para uso exclusivo de su destinatario intencional. Si usted no es el destinatario, le informamos que no podrá usar, retener, imprimir, copiar, distribuir o hacer público su contenido. Cualquier retención, revisión no autorizada, distribución, divulgación, reenvío, copia, impresión, reproducción o uso indebido de este mensaje y/o anexos, está estrictamente prohibida y sancionada de acuerdo con la Ley 1273 de enero del 2009. Si ha recibido este correo por error, por favor elimínelo e infórmenos al correo juanfc...@correounivalle.edu.co Si usted es el destinatario, le solicitamos mantener reserva sobre el contenido, los datos o información de contacto del remitente y en general sobre la información de este documento y/o archivos adjuntos, a no ser que exista una autorización explícita.

Ivan Uemlianin

unread,
Dec 22, 2022, 4:56:53 PM12/22/22
to const...@googlegroups.com
It can certainly talk the talk.  It cites six papers -- are any of them real?  I couldn't find any of them with Google.

Ivan
To view this discussion on the web visit https://groups.google.com/d/msgid/constraints/CALDCKohACrP4GgcxkoTZB3vtD81aee7OJXvSOg77zod1DCv03w%40mail.gmail.com.

-- 
============================================================
Ivan A. Uemlianin PhD
Llaisdy

Ymchwil a Datblygu Technoleg Lleferydd
Speech Technology Research and Development

                    iv...@llaisdy.com
                        @llaisdy
                         llaisdy.wordpress.com
              github.com/llaisdy
                     www.linkedin.com/in/ivanuemlianin

                        festina lente
============================================================

Ciaran McCreesh

unread,
Dec 22, 2022, 5:23:56 PM12/22/22
to Constraints
I don't think we need to worry too much just yet.

Me: Show me how an integer linear inequality can be made generalised
arc consistent

ChatGPT: An integer linear inequality can be made generalized arc
consistent by applying the generalized arc consistency (GAC)
algorithm. This algorithm is used to ensure that a constraint
satisfaction problem (CSP) is consistent, meaning that it is possible
to find values for the variables that satisfy all of the constraints.

To make an integer linear inequality generalized arc consistent, we
can follow these steps:

Identify the variables in the inequality and the domain of each
variable. The domain is the set of possible values that the variable
can take on.

Identify the constraints that the inequality imposes on the variables.
These constraints may be in the form of bounds on the variables or
relationships between the variables.

Use the GAC algorithm to eliminate values from the domains of the
variables that are not consistent with the constraints. This process
is called domain filtering.

Repeat the process of domain filtering until the domains of the
variables are generalized arc consistent. This means that for any pair
of variables, there is at least one value in the domain of each
variable that satisfies the constraints between the two variables.

For example, consider the inequality "x + y <= 5", where x and y are
integer variables with domains {1, 2, 3, 4, 5} and {1, 2, 3, 4, 5},
respectively. To make this inequality generalized arc consistent, we
can apply the GAC algorithm as follows:

Identify the variables and their domains: x ∈ {1, 2, 3, 4, 5}, y ∈ {1,
2, 3, 4, 5}

Identify the constraints: x + y <= 5

Apply domain filtering:

For x = 1, the possible values of y that satisfy the constraint are y
∈ {1, 2, 3, 4}. Therefore, we can eliminate the value 5 from the
domain of y.
For x = 2, the possible values of y that satisfy the constraint are y
∈ {1, 2, 3}. Therefore, we can eliminate the values 4 and 5 from the
domain of y.
For x = 3, the possible values of y that satisfy the constraint are y
∈ {1, 2}. Therefore, we can eliminate the values 3, 4, and 5 from the
domain of y.
For x = 4, the only value of y that satisfies the constraint is y = 1.
Therefore, we can eliminate the values 2, 3, 4, and 5 from the domain
of y.
For x = 5, there are no values of y that satisfy the constraint.
Therefore, we can eliminate the value 5 from the domain of x.

After applying domain filtering, the domains of the variables become x
∈ {1, 2, 3, 4} and y ∈ {1}, which are generalized arc consistent.

Me: What about integer linear equalities?

ChatGPT: The process for making an integer linear equality generalized
arc consistent is similar to the process for making an integer linear
inequality generalized arc consistent. The main difference is that, in
an equality, the variables must be assigned values that exactly
satisfy the constraint, rather than values that are less than or equal
to a certain value as in an inequality.

To make an integer linear equality generalized arc consistent, we can
follow these steps:

Identify the variables in the equality and the domain of each
variable. The domain is the set of possible values that the variable
can take on.

Identify the constraints that the equality imposes on the variables.
These constraints may be in the form of bounds on the variables or
relationships between the variables.

Use the GAC algorithm to eliminate values from the domains of the
variables that are not consistent with the constraints. This process
is called domain filtering.

Repeat the process of domain filtering until the domains of the
variables are generalized arc consistent. This means that for any pair
of variables, there is at least one value in the domain of each
variable that satisfies the constraints between the two variables.

For example, consider the equality "x + y = 5", where x and y are
integer variables with domains {1, 2, 3, 4, 5} and {1, 2, 3, 4, 5},
respectively. To make this equality generalized arc consistent, we can
apply the GAC algorithm as follows:

Identify the variables and their domains: x ∈ {1, 2, 3, 4, 5}, y ∈ {1,
2, 3, 4, 5}

Identify the constraints: x + y = 5

Apply domain filtering:

For x = 1, the only value of y that satisfies the constraint is y = 4.
Therefore, we can eliminate the values 1, 2, 3, and 5 from the domain
of y.
For x = 2, the only value of y that satisfies the constraint is y = 3.
Therefore, we can eliminate the values 1, 2, 4, and 5 from the domain
of y.
For x = 3, the only value of y that satisfies the constraint is y = 2.
Therefore, we can eliminate the values 1, 3, 4, and 5 from the domain
of y.
For x = 4, the only value of y that satisfies the constraint is y = 1.
Therefore, we can eliminate the values 2, 3, 4, and 5 from the domain
of y.
For x = 5, the only value of y that satisfies the constraint is y = 0.
However, y is an integer variable and the domain of y does not include
the value 0. Therefore, we can eliminate the value 5 from the domain
of x.

After applying domain filtering, the domains of the variables become x
∈ {1, 2, 3, 4} and y ∈ {4}, which are generalized arc consistent.

Me: What is the complexity of achieving generalised arc consistency
for an integer linear equality?

ChatGPT: The complexity of achieving generalized arc consistency for
an integer linear equality depends on the specific problem and the
algorithm used to solve it. In general, the complexity of the GAC
algorithm for a CSP with n variables and m constraints is O(nmd^2),
where d is the average domain size of the variables.

For an integer linear equality, the complexity may be further
influenced by the specific form of the equality and the size of the
domains of the variables. For example, if the equality involves only a
small number of variables with small domains, it may be relatively
straightforward to achieve generalized arc consistency. On the other
hand, if the equality involves a large number of variables with large
domains, it may be more difficult to achieve generalized arc
consistency.

It is also worth noting that the GAC algorithm is not guaranteed to
find a solution to a CSP in a finite amount of time, as it may require
an unbounded number of iterations of domain filtering to achieve
generalized arc consistency. However, in practice, the GAC algorithm
is often able to solve many CSPs efficiently.

Me: Give me a linear time algorithm for achieving generalised arc
consistency for an integer linear inequality.

ChatGPT: One linear time algorithm for achieving generalized arc
consistency for an integer linear inequality is the generalized
forward checking (GFC) algorithm. This algorithm is a variant of the
GAC algorithm that is specifically designed for solving linear
inequalities.

To achieve generalized arc consistency for an integer linear
inequality using the GFC algorithm, we can follow these steps:

Identify the variables in the inequality and the domain of each
variable. The domain is the set of possible values that the variable
can take on.

Identify the constraints that the inequality imposes on the variables.
These constraints may be in the form of bounds on the variables or
relationships between the variables.

Use the GFC algorithm to eliminate values from the domains of the
variables that are not consistent with the constraints. This process
is called domain filtering.

Repeat the process of domain filtering until the domains of the
variables are generalized arc consistent. This means that for any pair
of variables, there is at least one value in the domain of each
variable that satisfies the constraints between the two variables.

The GFC algorithm works by iterating over the variables and checking
whether there are any values in the domain of each variable that are
not consistent with the constraints. If there are no such values, the
domain of the variable is generalized arc consistent. This process is
repeated until the domains of all variables are generalized arc
consistent.

The time complexity of the GFC algorithm is O(n * d), where n is the
number of variables and d is the average domain size of the variables.
This means that the algorithm has a linear time complexity, as it
involves iterating over all variables, rather than over all pairs of
variables as in the AC algorithm.

It is worth noting that the GFC algorithm is not always the most
efficient algorithm for achieving generalized arc consistency, as it
may require a large number of iterations of domain filtering to solve
certain problems. However, it is often a useful algorithm for solving
CSPs with linear inequalities, as it is relatively simple to implement
and can be effective for many problems.

Me: Does the existence of a polynomial time algorithm achieving
generalised arc consistency for an integer linear equality imply that
P = NP, by way of the subset problem problem?

ChatGPT: No, the existence of a polynomial time algorithm for
achieving generalized arc consistency for an integer linear equality
does not imply that P = NP.

The subset sum problem is a problem in which we are given a set of
integers and a target sum, and we want to find a subset of the
integers that adds up to the target sum. This problem is known to be
NP-complete, meaning that it is thought to be difficult to solve in
polynomial time, but it can be verified in polynomial time once a
solution is found.

The subset sum problem can be represented as a constraint satisfaction
problem (CSP) and solved using the generalized arc consistency (GAC)
algorithm. However, the complexity of the GAC algorithm for a CSP with
n variables and m constraints is O(nmd^2), where d is the average
domain size of the variables. Therefore, the complexity of the GAC
algorithm for the subset sum problem is O(n^2*d^2). This means that
the GAC algorithm does not solve the subset sum problem in polynomial
time.

In general, P ≠ NP means that there are some problems that are thought
to be difficult to solve in polynomial time, but can be verified in
polynomial time once a solution is found. The subset sum problem is
one example of such a problem. Therefore, the existence of a
polynomial time algorithm for achieving generalized arc consistency
for an integer linear equality does not imply that P = NP.


So close, yet oh so very far...
> --
> You received this message because you are subscribed to the Google Groups "Constraints" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to constraints...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/constraints/5c3bc662-512f-4ec5-8f62-de7ec17b255fn%40googlegroups.com.



--
Ciaran McCreesh

Eugene Freuder

unread,
Dec 22, 2022, 5:55:45 PM12/22/22
to Ivan Uemlianin, const...@googlegroups.com, van...@uwaterloo.ca

On Dec 22, 2022, at 4:56 PM, Ivan Uemlianin <iv...@llaisdy.com> wrote:

It can certainly talk the talk.  It cites six papers -- are any of them real?  I couldn't find any of them with Google.

Ivan

Fascinating! :-) 

Peter, What do you think of “your” paper? 

Ivan Uemlianin

unread,
Dec 23, 2022, 11:19:42 AM12/23/22
to const...@googlegroups.com
"hallucinate" is quite a good word to describe how ChatGPT generates text generally.  Academic-style citations are nothing special.

Ivan


On 23/12/2022 15:52, Peter van Beek wrote:
The paper it cites with me as a co-author is not an actual paper and I have never done research with the co-author listed. I have heard from others that chatGPT will hallucinate examples, generalizing from the examples it has seen. The hallucinated examples are very plausible but not factual. A bit disturbing in my view that it was released in this form even if they did put a disclaimer on it.

Peter


From: Eugene Freuder <e.fr...@gmail.com>
Sent: Thursday, December 22, 2022 5:55:42 PM
To: Ivan Uemlianin <iv...@llaisdy.com>
Cc: const...@googlegroups.com <const...@googlegroups.com>; Peter van Beek <van...@uwaterloo.ca>
Subject: Re: [constraints] More ChatGPT
 
Reply all
Reply to author
Forward
0 new messages