Graph edge weights as a function


Dan Homerick

Jul 27, 2010, 5:18:18 PM7/27/10
to networkx...@googlegroups.com
I've recently run into a case where I want to specify how edge weights are calculated when I run a traversal algorithm. In my specific case, I'm using the graph to describe a transit network, and I want to be able to find the shortest path based on various criteria: distance, distance/vehicle_speed, a congestion metric, etc.

It occurs to me that a really nice interface for this sort of thing would be if each of the traversal algorithms that use edge weights had an optional "weighting" parameter which takes a function. This is similar to how Python's list.sort() takes a "cmp" parameter for specifying how the sort should be done.

The usage would look something like:

--
G = networkx.Graph()
G.add_edge('Sacramento', 'San Francisco', {"distance":200}) 

v_speed = 65
def travel_time(u, v, attr_dict):
    """
    Calculates the edge weight based on the "distance" attribute and v_speed, where v_speed is included via the closure.
    u: The source node for this edge.
    v: The destination node for this edge
    attr_dict: A dictionary containing the edge attributes.
    """
    return attr_dict["distance"] / v_speed

# Find minimum travel time and the associated path
print networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=travel_time)

# Find minimum distance
networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=lambda u, v, attr_dict: attr_dict["distance"] )
--

What do you think? Would anybody else find this useful?

 - Dan

Dan Homerick

Jul 27, 2010, 6:37:15 PM7/27/10
to networkx...@googlegroups.com
On Tue, Jul 27, 2010 at 2:18 PM, Dan Homerick <danho...@gmail.com> wrote:
I've recently run into a case where I want to specify how edge weights are calculated when I run a traversal algorithm... 
# Find minimum distance
networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=lambda u, v, attr_dict: attr_dict["distance"] )
 

I just realized that the second example in my original email (quoted above) could more easily be accomplished via:
networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weight="distance" )

I'm not proposing that the weight parameter go away; in most cases it is sufficient, and it is simpler to just specify which attribute you want to use as the weight. The weighting function would be an additional keyword parameter, not a replacement.

- Dan

Dan Homerick

Jul 28, 2010, 1:27:21 AM7/28/10
to networkx...@googlegroups.com
Attached is a .patch file that implements the additional parameter (called weight_fnc rather than weighting). It includes a few additions to the unit tests, and passes them.
It's written against the svn checkout.

Unfortunately, my editor automatically trimmed some whitespace from line endings, so the patch looks more extensive than it really is. I've not had occasion to use .patch files much -- can/should I remove the extra "changes" from the file by hand?

Cheers,
 - Dan
weight_fnc.patch

Aric Hagberg

Jul 28, 2010, 11:16:25 AM7/28/10
to networkx...@googlegroups.com
On Tue, Jul 27, 2010 at 11:27 PM, Dan Homerick <danho...@gmail.com> wrote:
> Attached is a .patch file that implements the additional parameter, (called
> weight_fnc, rather than weighting). It includes a few additions to the unit
> tests, and passes them.
> It's written against the svn checkout.
> Unfortunately, my editor automatically trimmed some whitespace from line
> endings, so the patch looks more extensive than it really is. I've not had
> occasion to use .patch files much -- can/should I remove the extra "changes"
> from the file by hand?

Thanks for the code. I like the idea but would like to hear other
opinions on whether we should add this.

Clearly it adds some extra interesting flexibility in specifying the
'weights' for shortest path calculations. But arguments against adding
such a feature might be
- Performance decrease (not sure how much)
- Already can achieve the same result by looping over all edges and
calculating/adding a new 'weight' attribute (could use whatever name you like)
- Clutters interface to algorithm(s)
- For consistency other algorithms might also need to be changed
(e.g. minimum spanning tree)

Aric

Loïc Séguin-Charbonneau

Jul 28, 2010, 1:09:19 PM7/28/10
to networkx...@googlegroups.com

Dan,

In the example you gave,

>> --
>> G = networkx.Graph()
>> G.add_edge('Sacramento', 'San Francisco', {"distance":200})
>>
>> v_speed = 65
>> def travel_time(u, v, attr_dict):
>> """
>> Calculates the edge weight based on the "distance" attribute and v_speed, where v_speed is included via the closure.
>> u: The source node for this edge.
>> v: The destination node for this edge
>> attr_dict: A dictionary containing the edge attributes.
>> """
>> return attr_dict["distance"] / v_speed
>>
>> # Find minimum travel time and the associated path

>> print networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=travel_time)


>>
>> # Find minimum distance
>> networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=lambda u, v, attr_dict: attr_dict["distance"] )

>> --

you can get the same result by simply dividing the path lengths by v_speed at the end. The paths themselves won't change if you just multiply every edge weight by a constant.

That being said, I can think of examples where the weighting function is more complicated. The suggested approach does make sense, but, as Aric said, the same result can be obtained by just assigning a new attribute, say travelTime, to each edge and then running the shortest path algorithm:

networkx.bidirectional_dijkstra(G, u, v, weight = "travelTime")

The optional weight argument has been added to the shortest path algorithms to do just that. Moreover, when adding edge attributes, nothing prevents you from using a function, for instance:

import networkx as nx

def f(u, v):
    return u * v

G = nx.DiGraph()
G.add_edges_from([(u, v, {'myAttr': f(u, v)}) for u in range(10) for v in range(10)])
nx.bidirectional_dijkstra(G, 3, 9, weight = 'myAttr')

I feel that having an extra weighting argument is not necessary.

One possibility would be to try and generalize the role of the 'weight' argument so that it can be a function as well. But again, I think this is redundant with what is already in place.

Loïc


Ben Edwards

Jul 28, 2010, 2:30:08 PM7/28/10
to networkx-discuss
It might also cause a performance slowdown if one isn't careful. Suppose
the weight function you have runs in O(E) time (such as normalizing all
the weights by the max weight). Dijkstra's runs in something like
O(E + N log N) time. If you pass Dijkstra's the O(E) function, suddenly it
runs in O(E^2 + N log N) time. However, if the values are precalculated
and added as attributes it will run in O(E + E + N log N) = O(E + N log N)
time. Of course all this could be avoided with careful programming, but I
find myself rarely being a careful programmer.
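
To make the concern concrete, here is a rough sketch in plain Python (independent of any particular NetworkX keyword; the graph and names are illustrative) of the difference between recomputing a normalization constant on every weight lookup and hoisting it out via a closure:

----
import networkx as nx

G = nx.Graph()
G.add_edge('a', 'b', weight=3.0)
G.add_edge('b', 'c', weight=7.0)

# Careless: every lookup rescans all edges, so each call is O(E).
def normalized_weight_slow(u, v, attrs):
    max_w = max(d['weight'] for _, _, d in G.edges(data=True))
    return attrs['weight'] / max_w

# Careful: compute the constant once, capture it in a closure, O(1) per call.
max_w = max(d['weight'] for _, _, d in G.edges(data=True))
def normalized_weight_fast(u, v, attrs):
    return attrs['weight'] / max_w
----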

On Jul 28, 11:09 am, Loïc Séguin-Charbonneau <loicseg...@gmail.com>
wrote:

Dan Homerick

Jul 28, 2010, 2:57:38 PM7/28/10
to networkx...@googlegroups.com
On Wed, Jul 28, 2010 at 8:16 AM, Aric Hagberg <ahag...@gmail.com> wrote:
Thanks for the code.  I like the idea but would like to hear other
opinions on whether we should add this.

Clearly it adds some extra interesting flexibility in specifying the
'weights' for shortest path calculations.  But arguments against adding
such a feature might be
- Performance decrease (not sure how much)

I'm not sure either, but I would be willing to address this. The implementation I did places an extra branch in a loop, but the same branch is taken every time. If the graph is large enough that performance is an issue, I expect that the CPU's branch predictor will easily remove any performance penalty. If my change did introduce a significant performance penalty, I would be willing to rewrite for better performance.
 
- Already can achieve the same result by looping over all edges and
 calculating/adding a new 'weight' attribute (could use whatever name you like)

More on this below, but this only works well when the edge attributes are fairly static. In the case where the attributes are rapidly changing, and where you are repeatedly routing over the graph with slightly different conditions, decorating the graph with a new weight attribute each time becomes cumbersome.
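
For reference, the precompute-and-decorate alternative looks roughly like this as a sketch (attribute names are illustrative; it assumes the existing weight keyword mentioned earlier in the thread):

----
import networkx as nx

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', distance=200.0)

v_speed = 65.0
# Precompute a derived attribute over every edge ...
for u, v, attrs in G.edges(data=True):
    attrs['travel_time'] = attrs['distance'] / v_speed

# ... then point the existing weight keyword at it. This loop has to be
# re-run whenever v_speed or the distances change.
length, path = nx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco',
                                         weight='travel_time')
----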
 
- Clutters interface to algorithm(s)

This is a fair critique, though I think it's an easily understood addition to the interface. It's optional, so the extra complexity doesn't affect the user unless they are interested in using it. It's very similar to the interface for sorting a list, so I think it will be understood almost instantly by most people.
 
- For consistency other algorithms might also need to be changed
 (e.g. minimum spanning tree)

I would be willing to update MST as well. I definitely agree that every algo that uses graph weights should have the same interface.

----- 
The idea of using a weight_fnc to calculate the weights while iterating over the graph is useful in two cases in particular:
1. When the edge attributes are frequently changing, but running shortest path is relatively rare. In this case, the weight_fnc offers a way to lazily calculate the weight, and is a performance optimization.

2. When the edge weight depends on data that is not stored within the graph. This allows a very healthy separation between the graph, with all of its connectivity information, and some other source of weights. Think of the weights as "business logic" that doesn't need to be stored directly within the graph. In this case it's possible to decorate the graph with the current weights prior to running shortest path, but then you have a (potentially stale) cache of edge weights that are stored in the graph but aren't maintained by the graph.

If you think of the graph as a static object, created once, analyzed, then discarded, then the weight_fnc is never a very compelling addition. If you think of the graph as a data structure that holds some connectivity, to be used in a program in which the weights evolve over time (a common case, I think!), then a weight_fnc becomes much more compelling.
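
A minimal sketch of the second case, with the weight "business logic" living outside the graph. The weight_fnc keyword is the proposal under discussion, not an existing NetworkX parameter, so the call is shown commented out; the congestion table is illustrative:

----
import networkx as nx

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', distance=200.0)

# External state that changes over time and is not stored in the graph.
congestion = {('Sacramento', 'San Francisco'): 1.3}

def congested_time(u, v, attrs):
    # Look up the external state at query time; no stale copy lives in G.
    factor = congestion.get((u, v), congestion.get((v, u), 1.0))
    return attrs['distance'] * factor

# With the proposed keyword, the call would read something like:
# length, path = nx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco',
#                                          weight_fnc=congested_time)
----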

- Dan

Dan Homerick

Jul 28, 2010, 4:40:41 PM7/28/10
to networkx...@googlegroups.com
On Wed, Jul 28, 2010 at 11:30 AM, Ben Edwards <bjed...@gmail.com> wrote:
It might also be a performance slowdown if one isn't careful. Suppose
the weight function you have runs in O(E) time (such as normalizing
all the weights by the max weight). Dijkstra's runs in something like
O(E + NLogN) time. If you pass dijkstra's the O(E) function, suddenly
it runs in O(E^2 + NLogN) time. However if the values are
precalculated and added as attributes it will run in O(E +E +NlogN)=O(E
+NlogN) time. Of course all this could be avoided with careful
programming, but I find myself rarely being a careful programmer.

I don't see this as an issue with the interface. I think it's fairly obvious that the weight_fnc will be called many times, just as it's obvious that the list.sort() cmp function will be called many times.

In fact, when the weight function is genuinely expensive (as compared to being expensive because you're doing it wrong...), passing in the weight function can be a significant performance boost. If you decorate the graph with the weights, you need to calculate the weight for every edge in the graph. If the algo that you're running doesn't visit every edge, then passing in a weight function may do significantly less work.

 - Dan

Ben Edwards

Jul 28, 2010, 5:30:20 PM7/28/10
to networkx-discuss
I guess I still fail to see the speed-up. You needn't calculate your
new 'weight decorations' for all the edges. Since each edge has its
own dictionary you can simply calculate it for the needed edges and
store it in the edge dictionary. Simply check before the edge is used,
i.e. G.edge[i][j].has_key('distance'), and if not, call the function.
Either way you are only calling the weight function when needed. The
way it is now, one calls the function and stores the return value; the
way you suggest, whichever algorithm is at work would calculate the
new weight, use it, and then it would likely be lost. Unless you
suggest the function both calculate a new weight value and store it in
the dictionary, in which case there is even less distinction.
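
A sketch of that memoization: compute the derived value on first use and store it in the edge dictionary so later lookups are plain reads (the attribute and function names here are illustrative):

----
import networkx as nx

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', distance=200.0)

v_speed = 65.0

def travel_time(u, v):
    attrs = G[u][v]
    # Compute lazily on first access, then reuse the cached value.
    if 'travel_time' not in attrs:
        attrs['travel_time'] = attrs['distance'] / v_speed
    return attrs['travel_time']
----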

Moreover, I disagree that there should be a 'very healthy
separation' between a graph and other data. Most graphs that are used
to model physical systems have characteristics that intuitively should
be attached to nodes and edges. This is what makes the current system
as elegant as it is. The graph stores exactly as much data as the user
wants it to. If edges have specific weight attributes, regardless of
how they are calculated, they should be attached to the edges in my
opinion.

I'm not opposed to these kinds of API changes, it just seems to me
there is little benefit, and it is maybe even a little unpythonic. This
would essentially make two ways to do the same thing; from the Zen of
Python:

"There should be one-- and preferably only one --obvious way to do
it."

On Jul 28, 2:40 pm, Dan Homerick <danhomer...@gmail.com> wrote:

Dan Homerick

Jul 28, 2010, 5:35:14 PM7/28/10
to networkx...@googlegroups.com
On Wed, Jul 28, 2010 at 10:09 AM, Loïc Séguin-Charbonneau <loics...@gmail.com> wrote:
Dan,

In the example you gave,

>> --
>> G = networkx.Graph()
>> G.add_edge('Sacramento', 'San Francisco', {"distance":200})
>>
>> v_speed = 65
>> def travel_time(u, v, attr_dict):
>>     """
>>     Calculates the edge weight based on the "distance" attribute and v_speed, where v_speed is included via the closure.
>>     u: The source node for this edge.
>>     v: The destination node for this edge
>>     attr_dict: A dictionary containing the edge attributes.
>>     """
>>     return attr_dict["distance"] / v_speed
>>
>> # Find minimum travel time and the associated path
>> print networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=travel_time)
>>
>> # Find minimum distance
>> networkx.bidirectional_dijkstra(G, 'Sacramento', 'San Francisco', weighting=lambda u, v, attr_dict: attr_dict["distance"] )
>> --

you can get the same result by simply dividing the path lengths by v_speed at the end. The path themselves won't change if you just multiply every edge weight by a constant.

That being said, I can think of examples where the weighting function is more complicated. The suggested approach do make sense, but, as Aric said, the same result can be obtained by just assigning a new attribute, say travelTime, to each edge and then running the shortest path algorithm

 In trying to pare down my example to be as small as possible, I cut too deep. Here's a better, although still simple, example.

----
class Vehicle(object):
    def __init__(self, desired_speed, origin, dest):
        self.desired_speed = desired_speed
        self.origin = origin
        self.dest = dest
        self.path = []

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', {'number_of_vehicles': 0, 'max_speed': 65, 'length': 200})
...

for vehicle in vehicles_list:
    def weight_fnc(u, v, edge_attr):
        speed = min(vehicle.desired_speed, edge_attr['max_speed'])   # note that this will be different for every vehicle!
        congestion_rating = calc_congestion(edge_attr['number_of_vehicles'])    # some potentially expensive function
        return edge_attr['length'] / speed + congestion_rating   # one plausible way to combine travel time and congestion

    vehicle.path = nx.some_shortest_path_algo(G, vehicle.origin, vehicle.dest, weight_fnc=weight_fnc)
    G.get_edge_data(vehicle.path[0], vehicle.path[1])['number_of_vehicles'] += 1     # changes an edge attribute
----

As I've said in other posts, not only is this approach relatively easy to understand, it can run quite a bit faster than if every edge needed to have its weight recalculated, since the edge weights may be calculated in a lazy manner.

 - Dan

Christopher Ellison

Jul 28, 2010, 5:54:59 PM7/28/10
to networkx...@googlegroups.com

I'm in agreement with Dan here, though I could be persuaded otherwise.

If you add the additional weight for every edge, and your weight
function is O(E), then you have O(E^2). If you subsequently run
Dijkstra, your overall time is O(E^2 + NlogN) rather than O(E + NlogN).

Given his most recent example of iterating through a long list of
vehicles, each of which interfaces to the graph differently, it is
definitely a possibility that the graph algorithm might not visit the
same number of edges with each vehicle. In this case, you can
definitely get a speed up.

Chris

Dan Homerick

Jul 28, 2010, 5:55:07 PM7/28/10
to networkx...@googlegroups.com
On Wed, Jul 28, 2010 at 2:30 PM, Ben Edwards <bjed...@gmail.com> wrote:
I guess I still fail to see the speed-up. You needn't calculate your
new 'weight decorations' for all the edges. Since each edge has its
own dictionary you can simply calculate it for the needed edges and
store it in the edge dictionary.

How do you know which edges you need until you run the algorithm?
 
Simply check before the edge is used
i.e. G.edge[i][j].has_key('distance'), if not call the function.
Either way you are only calling the weight function when needed. The
way it is now, one calls the function and stores the return value, the
way you suggest, whichever algorithm is at work would calculate the
new weight, use it, and then it would likely be lost. Unless you
suggest the function both calculate a new weight value and store it in
the dictionary, then there is even less distinction.

See the example usage I wrote in response to Loic's post. What if the weight depends not just on the edge attributes, but on other variables external to the graph? The weights can be calculated and stored in the graph, but when the external variable is changed, the weights are not. The graph becomes a stale cache of the true weights, and you have to do more work to either keep the graph's weights up to date whenever the external variable changes, or you have to "flush the cache" and just recalculate all of the weights. You can't just update some of the weights, because you don't know which edges will be needed until you run the algorithm.
 
Moreover, I disagree that there should be a 'very healthy
separation' between a graph and other data. Most graphs that are used
to model physical systems have characteristics that intuitively should
be attached to nodes and edges. This is what makes the current system
as elegant as it is. The graph stores exactly as much data as the user
wants it to. If edges have specific weight attributes, regardless of
how they are calculated, they should be attached to the edges in my
opinion.

That's fine, but there are other use cases. Again, the weights may depend on factors that are logically separate from the graph. 
 
I'm not opposed to these kinds of API changes, it just seems to me
there is little benefit, and maybe even a little unpythonic. This
would essentially make two ways to do the same thing, from the zen of
python

"There should be one-- and preferably only one --obvious way to do
it."

I see your point. Expanding the interface to include a weight_fnc param increases complexity, and should be avoided if there's no compelling reason to add it. I think the reasons to add it are compelling, though, and that they're not just two ways of doing exactly the same thing.

- Dan

Christopher Ellison

Jul 28, 2010, 6:20:10 PM7/28/10
to networkx...@googlegroups.com

Agreed. In this case, it would not make sense to have the vehicles as
part of the graph. In some sense, you are composing the vehicles with
the graph object and then you are running a graph algorithm on the
"composed" object. The graph structure is distinct and it makes sense
that you would not want to modify it.

Generally, this seems to be in the same spirit as the
networkx.algorithms.attrmatrix.attr_matrix()
function. There, the edge_attr keyword can be a string or a callable.
If it is a callable, it receives the "from" and "to" nodes as inputs
from the algorithm and can calculate any "weight". With closures, one
should be able to make these functions do anything.
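
For reference, a rough sketch of such a call with a callable edge_attr (requires numpy; the exact return convention and argument handling may vary between NetworkX versions):

----
import networkx as nx

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', distance=200.0)

v_speed = 65.0

# The callable receives the two endpoint nodes and returns the matrix entry.
M, order = nx.attr_matrix(G, edge_attr=lambda u, v: G[u][v]['distance'] / v_speed)
----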

I've needed behavior like this before (hence attr_matrix) and it seems
you do currently as well. Perhaps as NetworkX usage continues to grow,
we'll see requests for this more frequently. It's definitely important to
balance functionality with performance, and if the performance hit cannot
be addressed (assuming it is even sizable), then maybe we have two
functions (at the cost of code duplication).

Chris

Ben Edwards

Jul 28, 2010, 8:32:42 PM7/28/10
to networkx-discuss
I figured out a little of why I am confused, mostly because the answer to
the question:

>How do you know which edges you need until you run the algorithm?

for Dijkstra's algorithm is all of them. Dijkstra's has to check all
edges to ensure that it has the shortest path. For minimum spanning
tree (networkx uses Kruskal's algorithm), it's also all of them: we have
to have the edges sorted by weight. So we would have to calculate the
weights ahead of time either way, whether you pass a function or do it
yourself, resulting in O(E^2 + E log E) = O(E^2). The only one that I can
tell might not check all of them is A*, but that takes a
heuristic function, so you could define your weight function there.
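
For reference, a rough sketch of NetworkX's A* call: the heuristic estimates the remaining distance from a node to the target, while edge weights are still read from an edge attribute (the coordinates and names here are illustrative):

----
import math
import networkx as nx

# Node coordinates kept outside the graph for this sketch.
coords = {'Sacramento': (0.0, 0.0), 'San Francisco': (-120.0, -60.0)}

G = nx.Graph()
G.add_edge('Sacramento', 'San Francisco', length=200.0)

def straight_line(u, v):
    # Estimated remaining distance from node u to the target v.
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

path = nx.astar_path(G, 'Sacramento', 'San Francisco',
                     heuristic=straight_line, weight='length')
----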

Thus in your example, for each car you are still calculating the
weight for each edge, if you are using shortest_path or dijkstra's.
I'll agree this is not space efficient if you store all the vehicles'
speeds on every weight, but you could replace them just as easily,
which would give you the same result as what you suggest.

Also, Chris, you're right that I screwed up in my initial analysis, but I
don't think you have it quite right either. If the weight function
takes O(E) time, then calculating first and then running Dijkstra's would
be O(E^2) for the calculation of the weights plus O(E + N log N), for a
total of O(E^2 + N log N). If Dijkstra's has to make a call to the weight
function each time it needs to access an edge weight, then it must be
O(E * O(E) + N log N) = O(E^2 + N log N).

Ben

On Jul 28, 4:20 pm, Christopher Ellison <celli...@cse.ucdavis.edu>
wrote:

Christopher Ellison

Jul 28, 2010, 8:49:52 PM7/28/10
to networkx...@googlegroups.com
Ben Edwards wrote the following on 07/28/2010 05:32 PM:
> I figured out a little why I am confused, mostly because the answer
> to the question:
>
>> How do you know which edges you need until you run the algorithm?
>
> for dijkstra's algorithm is all of them. Dijkstra's has to check all
> edges to ensure that it has the shortest path. For minimum spanning
> tree (networkx uses kruskal's algorithm), it's all of them. We have
> to have the edges sorted by weight. So we would have to calculate
> the weights ahead of time either way, whether you pass a function or
> do it yourself resulting in O(E^2 +ElogE)=O(E^2). The only one that I
> can tell that might not check all of them is A*, but that takes a
> heuristic function, so you could define your weight function there.
>
> Thus in your example, for each car you are still calculating the
> weight for each edge, if you are using shortest_path or dijkstra's.
> I'll agree this is not space efficient if you store all the vehicles
> speeds on every weight, but you could replace them just as easily,
> which would give you the same result as what you suggest.
>

Ahh...yes, I should have thought more about the actual graph algorithm
being used. So perhaps this on-the-fly weight computation is not as
useful for Dijkstra or minimum spanning tree. It might be more useful if
there were some algorithms that could quit early.


> Also Chris, you're right I screwed up in my initial analysis, but I
> don't think you have it quite right either. If the weight function
> takes O(E) time, then calculating first then running Dijkstra's
> would be O(E^2) for the calculation of the weights, then O(E+NlogN)
> for a total of O(E^2 +NlogN). If dijkstra's has to make a call to the
> weight function each time it needs to access an edge weight, then it
> must be O(E*O(E)+NLogN)=O(E^2+NlogN).
>

I think we are in agreement. :) Quoting:

> If you add the additional weight for every edge, and your weight

> function is O(E), then you have O(E^2). If you subsequently run
> Dijkstra, your overall time is O(E^2 + NlogN) rather than O(E +
> NlogN).

Chris

Dan Homerick

Jul 28, 2010, 9:19:32 PM7/28/10
to networkx...@googlegroups.com
On Wed, Jul 28, 2010 at 5:32 PM, Ben Edwards <bjed...@gmail.com> wrote:
I figured out a little why I am confused, mostly because the answer to
the question:

>How do you know which edges you need until you run the algorithm?

for dijkstra's algorithm is all of them. Dijkstra's has to check all
edges to ensure that it has the shortest path. For minimum spanning
tree (networkx uses kruskal's algorithm), it's all of them. We have to
have the edges sorted by weight. So we would have to calculate the
weights ahead of time either way, whether you pass a function or do it
yourself resulting in O(E^2 +ElogE)=O(E^2). The only one that I can
tell that might not check all of them is A*, but that takes a
heuristic function, so you could define your weight function there.

You're right that with classic Dijkstra -- in which you supply only an initial node and from which you find the length & path to every other node -- you end up calculating the weight for every edge. This is not generally the case for algorithms in which you supply both an initial and a goal node, however. Bidirectional Dijkstra terminates once the search trees intersect, for example. If the initial node and the goal node are near each other, only a small fraction of the graph will be explored. It's here that you will gain performance from using lazy weight evaluation (if your weight function is relatively expensive).
 
Thus in your example, for each car you are still calculating the
weight for each edge, if you are using shortest_path or dijkstra's.
I'll agree this is not space efficient if you store all the vehicles'
speeds on every weight, but you could replace them just as easily,
which would give you the same result as what you suggest.

True. But now I'm changing the meaning of the 'weight' attribute every time I run an algorithm on the graph. The graph's 'weight' attribute becomes a scratchpad, to be scribbled over by each new run. I lose the ability to use the graph simultaneously from multiple threads, with no benefit gained.
 
- Dan

Ben Edwards

Jul 29, 2010, 12:47:53 PM7/29/10
to networkx-discuss
Dan

You are right that bidirectional Dijkstra's does not necessarily
evaluate all edge weights, but be cautious: it also does not
necessarily find the shortest path; in fact, in many cases it does not.
If it is acceptable to not have the shortest path, there are other
algorithms, especially if your graph is embedded in R^2, which it seems
to be, that are heuristic and faster than A* or bidirectional
Dijkstra. NetworkX doesn't have them, but I could be convinced to
write them if people would be interested [1]. However, even for a
defined source and destination, you should run it for all possible
destinations and store the results. This is part of the magic of
Dijkstra's: we get source-to-all-nodes shortest paths in the same time
(asymptotically) as we get the shortest path to a specified target.
This might be especially relevant for the algorithm you have
partially described above: if the vehicle's desired speed does not
change, and the max speed on the edge does not change, you could
replace recalculation of shortest paths with lookups where they exist.
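
A rough sketch of that caching idea, keyed on origin and desired speed so repeated routing queries become dictionary lookups (this assumes the edge attributes from Dan's example and the weight keyword on the shortest-path functions, and it ignores the changing number_of_vehicles attribute):

----
import networkx as nx

path_length_cache = {}

def cached_lengths(G, origin, desired_speed):
    key = (origin, desired_speed)
    if key not in path_length_cache:
        # Decorate the graph once for this (origin, speed) combination ...
        for u, v, attrs in G.edges(data=True):
            attrs['travel_time'] = (float(attrs['length']) /
                                    min(desired_speed, attrs['max_speed']))
        # ... one single-source run then gives lengths to *all* destinations.
        path_length_cache[key] = nx.single_source_dijkstra_path_length(
            G, origin, weight='travel_time')
    return path_length_cache[key]
----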

In your case I can see how concurrency could be an issue, but I might
be obliged to save all the data (i.e. the weight_func result of each car
on each edge) anyway, since it might be useful for later analysis.
Whether this is stored on the graph or in an external data structure seems
to make little difference.

I guess mostly I am trying to weigh how much this is an
implementation challenge for a specific application, and how much of
this is a problem because networkx is limited. I am not unconvinced
that this would be useful in your case, but I am not sure I am
convinced that doing it with the standard API as it is now comes at a
sizable efficiency loss. I think there is roughly the same
risk of evaluating the weight function multiple times as there is of
evaluating it for unused edges in a given algorithm.

[1] Reach for A*: Efficient Point-to-Point Shortest Path Algorithms.
A.V. Goldberg, Haim Kaplan, and Renato F. Werneck. SIAM Workshop on
Algorithm Engineering and Experiments (ALENEX 06).
http://research.microsoft.com/apps/pubs/default.aspx?id=60764

On Jul 28, 7:19 pm, Dan Homerick <danhomer...@gmail.com> wrote: