I'm using the barabasi_albert algorithm for some experiments, so I
tried to make sure that I understood what it does. I came across some
comments in the implementation which suggested that the author was not
completely sure of its correctness:
    repeated_nodes.extend(edge_targets) # add one node for each
    repeated_nodes.extend([source]*m)   # and new node "source" has
    # choose m nodes randomly from existing nodes
    # N.B. during each step of adding a new node the probabilities
    # are fixed, is this correct? or should they be updated.
    # Also, random sampling prevents some parallel edges.
I'm not sure what is meant by "fixed". The code does update the
probabilities in the right way after each node is added: the
probability that a node is chosen as a target grows with the number of
links it already has.
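To convince myself, I wrote a minimal sketch of the growth step (my own
paraphrase, not the actual networkx code; the name ba_sketch and the
edge-list representation are mine). Each node appears in repeated_nodes
once per edge endpoint, so drawing uniformly from that list selects a
node with probability proportional to its degree, and extending the list
after every new node is exactly the probability update:

```python
import random

def ba_sketch(n, m, seed=None):
    """Toy Barabasi-Albert growth using the repeated-nodes trick."""
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))   # the first new node links to the m seed nodes
    repeated_nodes = []        # node i appears here once per edge endpoint
    source = m
    while source < n:
        edges.extend((source, t) for t in targets)
        repeated_nodes.extend(targets)       # each target gained one edge
        repeated_nodes.extend([source] * m)  # the new node gained m edges
        # A uniform draw from repeated_nodes hits node i with probability
        # degree(i) / len(repeated_nodes). sample() never reuses a position,
        # which prevents some (not all) parallel edges.
        targets = rng.sample(repeated_nodes, m)
        source += 1
    return edges

edges = ba_sketch(20, 2, seed=1)
```

Every new node contributes m edges, and every target was added before
its source, which is easy to verify on the returned edge list.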
Regarding the last comment: random.sample() samples without
replacement, so the same entry of the sequence is never chosen twice.
This is consistent with the original paper, which says:
"at every time step we add a new vertex with m edges that link the new
vertex to m different vertices..."
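A quick check of that behaviour (a throwaway experiment of mine, not
library code): sample() draws distinct positions without replacement,
and on a degree-weighted list like repeated_nodes the higher-degree
nodes are indeed chosen more often:

```python
import random
from collections import Counter

rng = random.Random(0)
# Node 0 has degree 3, node 1 has degree 2, node 2 has degree 1.
repeated_nodes = [0, 0, 0, 1, 1, 2]
counts = Counter()
for _ in range(1000):
    # sample() picks 2 distinct positions; a node that occurs several
    # times in the list can still be returned more than once.
    counts.update(rng.sample(repeated_nodes, 2))
```

Over many trials the selection frequencies track the degrees, which is
the preferential-attachment property the comment is asking about.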
So I think these comments can be removed.