Models Library


Alan Isaac

May 13, 2014, 9:54:53 AM
to au-ec...@googlegroups.com
Use this thread to discuss models from the NetLogo Models Library.

This week you will experiment with the following: Party model, Traffic Basic model, Wolf-Sheep Predation model.

Alan Isaac

May 14, 2014, 7:59:37 AM
to au-ec...@googlegroups.com
Reminder: In this thread, I should have seen your discussion of the Party model yesterday.  Please include those comments along with your observations on the Traffic Basic model today.  Please do not fall behind on your discussion assignments (posted this week under the ``Week 1`` thread).  Discussion is an important and graded part of the course.

Thank you,
Alan Isaac

Kevin Carrig

May 14, 2014, 1:45:56 PM
to au-ec...@googlegroups.com
What do you see happening as you run the Party Model? How do you explain it?

The Party Model simulates the dynamics of group interactions in which each gender is assigned a "tolerance" value. The "tolerance" value is understood as a ceiling on the share of the opposite sex in one's group: an individual is comfortable in a group as long as the proportion of the opposite sex is less than or equal to the tolerance threshold. If the composition of the group exceeds this value, the individual leaves the group to join another.

A simulation of 70 individuals and 10 groups indicates that there is an approximate critical tolerance value at which each group ends up single-sex. The 55% tolerance level results in some instances in which our final equilibrium includes mixed-gender groups (note: this integration does not occur in all cases at the 55% level). However, even if we extend this logic to larger threshold values (e.g., 70%), we still observe a large degree of separation between the sexes.
----------------------------------------
What do Railsback and Grimm say is a "common mistake of beginners"? Does the Party Model avoid that mistake? Is the resulting model useful? If so, how?

Railsback and Grimm emphasize the need for a simplified model when beginning the modeling cycle. The "common mistake of beginners" is overcomplicating a first model version with too many factors and interactions. The rationale for this approach is to frame an ABM as an iterative cycle; in each iteration, we can incorporate an additional factor and isolate the effect of that variable on the output of the model and on the effects of the other factors in play. This approach appears analogous to a forward selection stepwise regression, which likewise starts with a minimum number of variables and adds subsequent variables in a repetitive process (note: the addition of a variable depends on model comparison criteria [AIC, BIC, adjusted R-squared]).

The Party Model appears to avoid the error of overcomplication, but this simplicity may take away from the validity of the conclusions drawn from the resulting model. The dimensions of the initial model are relatively simple: an underlying assumption is that all individuals share the same tolerance level (both across and within gender) and that all individuals must belong to a group. By relaxing these constraints and implementing additional factors according to the modeling-cycle method, we may observe some gains in the quality of the simulation results. Some additional features that may be interesting to include are provided below:
  • We can relax the assumption that each individual will find a new group, and instead introduce a dynamic in which we allow larger groups to subdivide into smaller circles.
  • Tolerance levels may be distinct across gender: perhaps men have a lower "threshold" than women (a sketch of this follows the list).
  • Introduce a ceiling on the group sizes within the model. Additional thresholds may be incorporated to control for cases where individuals become uncomfortable when a particular group becomes too large.
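A minimal NetLogo sketch of the second bullet, assuming we add a sex variable and a per-person tolerance on top of the Models Library code (the names my-tolerance and assign-tolerances, and the specific cutoffs, are hypothetical, not the model's own):

    turtles-own [ sex my-tolerance ]

    to assign-tolerances
      ask turtles [
        ;; assumed illustrative cutoffs: men tolerate at most 25% women,
        ;; women tolerate at most 40% men
        ifelse sex = "male"
          [ set my-tolerance 25 ]
          [ set my-tolerance 40 ]
      ]
    end

Each person would then compare the share of the opposite sex at its group site against my-tolerance instead of against the global tolerance slider.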

Kevin Carrig

May 14, 2014, 7:09:35 PM
to au-ec...@googlegroups.com
How many ways can we split 5 people into at most 2 groups? How would you determine how many ways can we split 150 people into at most 10 groups?

Integer partition algorithms decompose an integer into a sum of positive integers. There are seven partitions of 5 in total (listed below), but only the first three use at most 2 parts, so there are three ways to split 5 people into at most 2 groups: 5, 4+1, and 3+2.
  • 5
  • 4+1
  • 3+2
  • 3+1+1
  • 2+2+1
  • 2+1+1+1
  • 1+1+1+1+1
Counting partitions turns out to be computationally manageable: although the related set-partition decision problem is NP-complete, the number of ways to split n people into at most k groups satisfies a simple recurrence, p(n, k) = p(n, k - 1) + p(n - k, k). The first term counts the partitions with fewer than k parts, and the second counts those with exactly k parts (subtract one from each of the k parts to get a partition of n - k into at most k parts). The base cases are p(0, k) = 1 and p(n, 0) = 0 for n > 0. This is a recursive algorithm (a problem framed in terms of "itself"), and with memoization (dynamic programming) it computes p(150, 10) quickly.
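Here is a sketch of that recurrence as a recursive NetLogo reporter (hypothetical code, not from any Models Library model); it is correct but slow for large inputs, so for 150 people and 10 groups you would want to memoize it (e.g., with the table extension) or port it to Python:

    ;; number of ways to partition n into at most k parts
    to-report partitions [n k]
      if n = 0 [ report 1 ]            ;; one way: the empty partition
      if n < 0 or k = 0 [ report 0 ]   ;; no way
      ;; partitions with exactly k parts (shrink every part by 1)
      ;; plus partitions with at most k - 1 parts
      report (partitions (n - k) k) + (partitions n (k - 1))
    end

As a check, show partitions 5 2 reports 3 and show partitions 5 5 reports 7, matching the list above.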
----------------------------------------
Use the default values of 70 individuals and 10 groups, but set the tolerance to say 12%. Now run the model a few times. (Once will probably be enough to see a very odd outcome.) Why does this happen?


A simulation of 70 individuals and 10 groups at a tolerance level of 12% presents a unique outcome: the number of single-sex groups and the number of happy individuals remain near zero for most of the run, with the final results of the model presented below:
  • Number happy: 70
  • Single-sex groups: 8
  • Group composition: 0, 1, 41, 1, 11, 2, 4, 1, 0, 9
In this case, we observe 43 males congregating into three groups, of which 41 belong to a single group. Females congregate across five groups, and we are left with two empty groups in our set of 10. It is interesting to note the two null groups in this population, as well as the group of 41 that represents roughly 60 percent of the total population at the party.

Kevin Carrig

May 14, 2014, 9:15:10 PM
to au-ec...@googlegroups.com
Traffic Model

How sensitive are the outcomes to the acceleration settings? How sensitive are the outcomes to the deceleration settings?

To test the sensitivity of the acceleration and deceleration settings, both factors were set to their middle values (0.0050 for acceleration and 0.050 for deceleration) for a 20-car simulation to establish a baseline for our comparison. In this setting, we observe the maximum car speed converge to approximately 0.50 through time period 1450. We then increase the acceleration setting by 50% (to 0.0075) while holding the deceleration setting constant. In this instance, we observe an increase in the maximum speed to roughly 0.63. Thus, a 0.0025 increase in the acceleration setting results in an approximate 0.10 increase in the maximum speed measurement. The inverse scenario, in which deceleration is increased by 50% (to 0.075) and acceleration is held constant, produces a 0.10 decrease in the maximum car speed. Thus, a 0.025 increase in the deceleration setting results in a 0.10 decrease in the maximum car speed. Because both changes produce an absolute change in maximum speed of about 0.10, and the increase in acceleration (0.0025) is ten times smaller than the increase in deceleration (0.025), we can conclude that the outcomes are more sensitive to the acceleration setting than to the deceleration setting.

What happens if you maximize acceleration and minimize deceleration. Is this a realistic outcome or a limitation of the model? Can you explain it?

The outcome of a model where acceleration is maximized and deceleration is minimized is an extreme case of the simulation. Here we observe perfect traffic flow in which no traffic jams occur and the maximum speed converges to its ceiling (1). This result is not realistic, because it ignores exogenous factors not captured in the model. Deceleration can be thought of as a function of several driving "triggers": foreseeing stop signs several streets away, observing a large car double-parked several streets away, and so on. These triggers also extend to highway driving; deceleration may occur when you come closer to your exit, for example. A better model might also incorporate the risk preferences unique to each driver. Some drivers are more inclined to brake when approaching the car ahead, while others are more inclined to "tailgate" the driver ahead of them. To operationalize this factor, we could create several classes of risk preferences (or a range from "risk loving" to "risk averse"). Model estimates may be more refined with the incorporation of these driver-specific measures as well as the driving "triggers" discussed above.
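As a hedged sketch, per-driver risk preferences might look like this in NetLogo, reusing Traffic Basic's speed variable and deceleration slider (the my-deceleration name and the chosen range are assumptions):

    turtles-own [ my-deceleration ]

    to setup-drivers
      ask turtles [
        ;; assumed: each driver brakes somewhere between 0.5x and 1.5x the
        ;; slider value, spanning "tailgater" to "risk averse"
        set my-deceleration deceleration * (0.5 + random-float 1)
      ]
    end

The car-following rule would then subtract my-deceleration, rather than the global deceleration, when a car slows down.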

Alan Isaac

May 14, 2014, 10:18:34 PM
to au-ec...@googlegroups.com
Hi Kevin,
Once you have learned enough NetLogo, I hope you will try out some of your proposed changes.  Please ask if you need help.
Cheers,
Alan Isaac

Alan Isaac

May 14, 2014, 10:25:51 PM
to au-ec...@googlegroups.com
*Traffic Model*

Good discussion.  Students who have examined Traffic (Basic) will probably enjoy this experiment:
https://www.youtube.com/watch?v=Suugn-p5C1M

Alan Isaac

Natalie Chambers

May 15, 2014, 7:23:07 AM
to au-ec...@googlegroups.com
Similar to the exogenous factors Kevin discussed, I think it is obvious from the traffic model that additional factors would need to be added to simulate the flow of traffic more accurately. While I was manipulating the factors of the model, I couldn't help but think of the effect of an accident on the flow of traffic in the other direction. Although this model is focused on the flow of traffic without an accident, I think it would be important to include the effect of an accident in traffic in the opposite direction. This would serve as a "trigger," as you say, because drivers often slow down to observe what happened, which impedes the flow of traffic.


Also, through the party model it seems that the authors may have made the model too simple. Although I understand the importance of not making a model too complex to begin with, the party model demonstrates that by including only one variable, "tolerance," there is the possibility of having a useless model that doesn't tell you much about what would actually occur in such settings.

Alan Isaac

May 15, 2014, 10:20:26 AM
to au-ec...@googlegroups.com
Traffic Model

Natalie proposes an interesting extension of the model, which raises the question of how we might most simply incorporate the idea of a distracting accident into the model.  E.g., do we really need to model the other side of the road, or would simpler approaches suffice?

Party Model

When deciding whether a model is "too simple", it is important to ask "for what?"  In particular, it is interesting to ask whether a model that is too simple to realistically represent any actual situation can still be useful.  I suggest that the answer is a big yes.  Indeed, I would argue that most of the core curriculum for an Economics degree illustrates this.  But it is certainly a question worth discussing.

Kevin Carrig

May 15, 2014, 10:38:24 PM
to au-ec...@googlegroups.com
Basic Neighborhood Primitives- Observations

The in-radius function reports those agents in a given agentset whose distance from the caller is less than or equal to n. Coupled with the other features specified in the code example, it seems as if this approach presents several applications in a GIS setting. Buffer zones are represented as vector polygons that a user can build around a set of individual points. In most applications, a user inputs a central point and a radius distance to create a buffer around a point that can take several forms (circle, square, ellipse, etc.) It seems as if building a basic neighborhood in NetLogo follows similar principles. 

The documentation presented does not fully explore how the distance measure is computed when executing the in-radius function. Euclidean distance would make sense if the points (agents) in the environment are projected in an x-y space. I'm curious whether there have been extensions of the basic neighborhood function that allow a user to specify a distance measure (i.e., Manhattan distance or some alternative) to apply to the model. We can expect some alternative to be necessary in instances where agents are designed to operate (move) in ways for which straight-line distance is not the natural metric.

Kevin Carrig

May 15, 2014, 11:12:09 PM
to au-ec...@googlegroups.com
Wolf-Sheep Predation Model

Agents do not age in this model. In terms of the goals of this model, does that matter? Why or why not?


The goal of the wolf-sheep predation model is to examine the stability of predator-prey ecosystems over time. We observe a simple system where the extinction of one or more species creates instability and long-run population maintenance preserves stability.

Introducing age to the first variation of the model would improve the validity of the results and present an outcome more analogous to a real-world ecosystem. For the wolves in the first model, each additional step consumes energy and requires each wolf to actively seek out sheep to replenish this energy and avoid death. We can consider incorporating the age of the agents over time to capture changes in the metabolic rate of an agent as it grows older. An agent's metabolic rate is some measure of the amount of energy consumed per unit of time, all other factors held equal. The addition of this factor would recognize that the satiation an agent gets from consuming a sheep changes with age. Age can also be incorporated in the first model variant to change the fixed probability of reproduction for each wolf at every time step. We must consider that fertility rates in most organisms fall with age, and that there exists some ceiling beyond which a wolf can no longer successfully reproduce. Incorporating age would account for the expectation that young wolves are more inclined to reproduce than older wolves.

The addition of an age factor can also be applied to the second model variant, in which sheep must eat grass in order to maintain their energy. This approach would also incorporate the metabolic argument outlined above. In addition, the age of the agents can also affect their ability to consume grass in the model: we should expect younger sheep to be faster than older sheep and better able to reach the next patch of grass as the current grass becomes depleted. A sketch of one possible implementation follows.
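A hedged NetLogo sketch of one way to bolt age onto the wolves (energy is the model's variable, but the wolf-reproduce slider name and the specific rates here are illustrative assumptions, not the Models Library code):

    turtles-own [ age ]   ;; in the real model this would go on the wolves breed

    to age-and-metabolize   ;; wolf procedure, run each tick
      set age age + 1
      ;; assumed: the energy cost of a step rises one percent per tick of age
      set energy energy - (1 + 0.01 * age)
    end

    to maybe-reproduce   ;; wolf procedure
      ;; assumed: reproduction probability falls linearly with age,
      ;; reaching zero at age 100
      let p (wolf-reproduce / 100) * max (list 0 (1 - age / 100))
      if random-float 1 < p [
        set energy energy / 2
        hatch 1 [ set age 0 ]
      ]
    end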


Alan Isaac

May 16, 2014, 9:08:54 AM
to au-ec...@googlegroups.com
in-radius

Yes, the distance measure is Euclidean distance, the same as used for NetLogo's distance reporter.

You can readily write your own reporters when you need a different distance measure, and you can use your distance measure to construct patch sets or turtle sets.  If you wish to experiment with this, feel free to post some code.  I'd be happy to comment on it.
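For instance, here is a minimal sketch of a Manhattan-distance reporter and one way to use it to build a turtle set (unlike the built-in distance, this version ignores world wrapping):

    to-report manhattan-distance-to [an-agent]   ;; turtle reporter
      report abs (xcor - [xcor] of an-agent) + abs (ycor - [ycor] of an-agent)
    end

    ;; a diamond-shaped neighborhood, the Manhattan analogue of in-radius:
    ;; let nearby other turtles with [manhattan-distance-to myself <= 5]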

Here is a simple discussion:

Natalie Chambers

May 16, 2014, 1:54:09 PM
to au-ec...@googlegroups.com
Traffic Model
I don't believe it would be necessary to model the other side of the road, because that would unnecessarily complicate the model; however, I believe that some form of distraction variable could be added. This variable could incorporate various types of distractions and use a scale system to model their effect on the flow of traffic. For example, suppose we categorized distractions from 1 to 5. If a traffic accident in oncoming traffic were the most distracting, its ranking would be 5, because it would impede the flow of traffic for the highest number of people or for the longest period of time; a distraction like an animal in the road, which may cause one person to slow down but otherwise may not affect any other driver, would be ranked a 1. By creating this type of dummy variable of sorts, the model would be able to incorporate many exogenous variables that have otherwise been left out of the model.

Party Model 
Although it is true that economic models generally oversimplify realistic representations of actual situations (and I am quite new to modeling, so this may be naive), I think that this model could still work while incorporating other "important" variables.

I have been thinking of ways to support this last comment for some time since beginning this post, and although I have not thought of an example of how to model variables simply, perhaps I can pose it as a question.

Example: A variable I believe is important for this model is conversation type. Maybe this would be too hard to model, but it would be an important reason why people circulate in social settings. If the conversation becomes political, maybe more people will join; whether the conversation is about popular ideas versus niche ideas could also cause people to leave or enter a conversation.

Would there be a way to model this type of variable?

Alan Isaac

May 16, 2014, 2:23:56 PM
to au-ec...@googlegroups.com
Traffic Model

Adding distractions seems like an interesting approach.  Here is one very simple way you could implement it.  You could give patches a distraction? variable (using patches-own).  Each period each patch could have a low probability of setting distraction? to true.  A car (turtle) will decelerate if on a distraction? = true patch.
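A minimal sketch of that suggestion, assuming Traffic Basic's speed variable and deceleration slider (the procedure names and the 1% probability are arbitrary choices):

    patches-own [ distraction? ]

    to update-distractions   ;; call once per tick, before cars move
      ask patches [
        ;; low probability of a distraction appearing this period
        set distraction? (random-float 1 < 0.01)
      ]
    end

    to slow-for-distraction   ;; car (turtle) procedure
      if distraction? [   ;; a turtle reads the variable of the patch it is on
        set speed max (list 0 (speed - deceleration))
      ]
    end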

Naturally many variants are possible, including setting the size of the distraction and the resulting deceleration.

Alan Isaac

May 16, 2014, 2:38:53 PM
to au-ec...@googlegroups.com
Party Model

The easiest way to add conversation type is probably as a patch variable, where you would set it for patches that are group sites. In the simplest case, it would just be an integer value (e.g., 0,1,2).  You could give each person (turtle) a list of topics they are interested in.  A more interesting approach might be to use bitstrings (e.g., "1011") to represent topic mixes and have persons decide whether or not to move groups based on the Hamming distance from the current group topic mix.
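A small sketch of the bitstring idea (a hypothetical helper, written with a while loop so it runs in older NetLogo versions as well):

    to-report hamming-distance [a b]
      ;; assumes a and b are equal-length bitstrings such as "1011"
      let d 0
      let i 0
      while [i < length a] [
        if item i a != item i b [ set d d + 1 ]
        set i i + 1
      ]
      report d
    end

A person could then prefer the group site whose topic mix minimizes hamming-distance from her own interest string.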

We can discuss this further if you end up with a project that focuses on group dynamics. You may want to take a look at this:

Natalie Chambers

May 16, 2014, 3:36:47 PM
to au-ec...@googlegroups.com
Excellent, thank you. This is so fascinating; I didn't even think of bitstrings, but that makes a lot of sense.

Amanda Saville

May 17, 2014, 4:34:29 PM
to au-ec...@googlegroups.com

Party Model

What does the tolerance slider do in the Party Model?  What do you expect to happen if you set tolerance to a large value? Does it?  What do you expect to happen if you set tolerance to a very small value?  Does it?

The tolerance is the main aspect of the model being tested—to what extent can members of the opposite sex remain comfortable in party conversations in which their sex is a minority in the group?  This assumes that women (or men) are only “comfortable” if a group conversation is made up of a certain percentage or less of individuals of the opposite sex.  I found that it was not until the tolerance reached about 63% (individuals were “comfortable” in situations where 63% or less of the group was made up of individuals of the opposite sex) that there were more mixed-sex groups than single-sex groups.  If tolerance is set low, however, we would expect there to be more single-sex groups.  I found that this is not necessarily the case.  Instead it seems that it is more in the medium-tolerance range that more single-sex groups emerge.  This obscures the fact, though, that many of the groups (when the model is run with low tolerance) are left with a value of zero (meaning there are no individuals in that particular group at all), and that the small number of single-sex groups are made up of a larger number of individuals.  The few groups that are mixed-sex have much more proportional numbers relative to one another.

In Chapter 1, how do Railsback and Grimm define a “full-fledged” ABM?  Based on this definition, to what extent would you characterize the Party Model as full-fledged?  Would RG argue that this model should be full-fledged?

A “full-fledged” ABM, according to Railsback and Grimm, requires that there is differentiation among individual actors in the model, and that all individuals make decisions based on their own preferences and characteristics.  The Party Model is not at all a “full-fledged” ABM under this definition, as each partygoer is assumed to have the same level of tolerance as the next partygoer and so on.  There is no differentiation even between the tolerance of males and of females.  This inherently simplifies real life, in the sense that the level of tolerance of individuals differs based on a number of other factors.  For example, I may be more tolerant of conversations in groups of mostly men, because I grew up with five step-brothers, than another woman who may feel more comfortable relating to groups made up primarily of women.

Though the Party Model is not a “full-fledged” ABM, I think that Railsback and Grimm would argue that it should not be.  The value of the model is to explore overall social tendencies, and exceptions to the norms are not necessary to examine the basic goal of the model.  One way that I think the model could be expanded, without over-complicating it in a way that would undermine its usefulness, would be to allow a certain percentage of men and women to have higher tolerances, allowing for some individual differentiation.  Additionally, I think that the expansion suggested under the “info” tab to include race or religion would also add a meaningful element to the model.

 

Traffic Model

To what extent would you characterize the model outcomes as emergent? 

Despite the simplicity of the traffic model, with the acceleration and deceleration speeds being the primary tools to manipulate the actions of individual turtles, I would characterize the model outcomes as wholly emergent.  Because in the very basic frame of the model there are no exogenous factors influencing the traffic conditions on the road (the only variables being the number of cars on the road, and the speeds of acceleration and deceleration), the environment of the model is shaped by the actions of the individuals responding to both one another and the greater system itself. 

There have often been times I’ve driven on highways and have experienced these types of traffic jams first hand—stuck for what seems like an eternity in bumper-to-bumper traffic only to reach a point in the road where the traffic (which had no source that I could tell) began to flow at a normal pace once again.  In this model, agents respond to their changing environment in a way that is simultaneously driven by and affects the other agents in the model, and that also affects the overall environment.  To my understanding, this is Railsback and Grimm’s very definition of emergence.

 

Wolf-Sheep Predation Model

How does Wilensky define “stability” for this model?  Why does adding “grass” to the model matter for the stability of outcomes?

Wilensky defines stability for this model as the self-sustainability of the model (with neither species going extinct) despite constant fluctuation in the respective populations.  After adjusting the sliders in several ways and running the model multiple times in each configuration with “grass” turned off, it seems that inevitably one (or sometimes both) of the species goes extinct (albeit after varying numbers of ticks).  While the wolf-parameter and sheep-parameter sliders are still important, grass prevents the over-population of either species, which in turn affects the population of the other species.  With more sheep, there are more wolves, until the number of wolves overtakes the population of sheep, which leads to the extinction of both (for instance).  As the population of sheep fluctuates with the availability of grass, in addition to the predatory relationship with the population of wolves, the fluctuation in the population of wolves is reactionary—thus stabilizing the overall system.

Amanda Saville

May 17, 2014, 4:42:17 PM
to au-ec...@googlegroups.com
Kevin, 

I will be honest that when I first read the question about whether age matters or not, I did not think about the fluctuations of metabolic rates or fertility within the species. Though I did consider that the amount of energy for each wolf (or sheep) to take a step would increase with age, I think that the points you have raised are even more compelling reasons to add age as an additional variable to the model. At the same time, it would be interesting to see how valuable such an addition really would be--or if the overall outcomes of the model wouldn't change with the addition of age (assuming that each old wolf would be replaced by one or more offspring).  Perhaps we would find that the stability (or instability) of the model would still depend only on the factors already explored in the model as it currently stands.

Natalie Chambers

May 17, 2014, 8:51:36 PM
to au-ec...@googlegroups.com
Yes, I agree with both Amanda and Kevin about the importance of age for the various reasons mentioned. Age would be a large factor in the outcome of the model under different settings.

One aspect of the model I found particularly fascinating was that changing the reproduction rate for sheep from 6 to 7%, with all else remaining equal, caused the extinction of the sheep rather than the wolves. It is interesting that such a seemingly small change has such a drastic impact on the final outcome of the entire population of a species. This makes me wonder whether each individual control has a "tipping point" at which the other species will "win," or whether this is unique to reproductive rates.

Mia Raths

May 18, 2014, 2:13:24 PM
to au-ec...@googlegroups.com
Party:
In the party model, the more people you have and the more groups you make, the higher the tolerance has to be in order to get all groups of mixed gender.  Again, I know this is a simple model, but it does not really seem to be accurate or even helpful to our understanding of how people act at a party.  There are many factors that are not taken into consideration, such as the fact that there is no limit on the size of the groups that can be formed, that some people like larger groups while others prefer smaller ones, and that at most parties the people will already have formed some positive and some negative relationships that will affect the groups they decide to join.  Obviously there is no way to predict these other factors accurately, but I feel that this model would not be helpful at all to a host.

Wolf-Sheep:

The idea of stability is very crucial to this particular model.  It is much easier to create a stable model between the sheep and grass, but adding wolves into the equation makes it much more difficult to maintain a stable ecosystem.  Since the sheep are affected by both the grass and the wolf population, they tend to be the least stable variable.  The longer you play it out (the more ticks), the more opportunity there is to maintain a stable model.

Traffic Model:

I spent a while yesterday playing around with the traffic model, and there is one thing that keeps bugging me about it.  We choose a particular number of cars to be on the road, but the fact that the same cars leave the grid on the right and then re-enter on the left means that it is cyclical.  Cars that caused the back-up come back in on the other side to be affected by the same back-up.  In a situation like this, the number of cars really just means the number of cars in the space.  The red car is supposed to be the same one, but it makes little sense to me in this context.

Adaora Isichei

May 18, 2014, 11:58:32 PM
to au-ec...@googlegroups.com

The Traffic Basic model demonstrates how basic traffic jams occur without an accident. The criterion for this model is simple: traffic jams occur because of the acceleration and deceleration of drivers.

This model assumes that drivers are adaptive, because the cars accelerate when no car is in front of them and decelerate when a car is ahead of them.  The outcomes are sensitive to the acceleration and deceleration settings. Interestingly, a traffic jam still occurs when the acceleration and deceleration settings are both set to their highest values. However, a traffic jam does not occur when acceleration is maximized and deceleration is minimized. To make the cars form a train, I minimized the acceleration and deceleration rates.

Railsback and Grimm define a "full-fledged" ABM as a model in which agents are different from each other. To make this model more full-fledged, I would allow the cars to have varying acceleration and deceleration rates. In real-life scenarios, cars tend to move at varying speeds. Also, a collision is likely to occur if a car moving at high speed suddenly decelerates at a high rate.


Adaora Isichei

May 19, 2014, 12:15:53 AM
to au-ec...@googlegroups.com
Hey Natalie,

I agree with both your points. I think the party model was too simple; there are many factors that influence the interaction of individuals, such as age, culture, etc. It would be interesting to see how the model turns out when such factors are included, but I guess ABMs are meant to be simple.
As for the traffic model, I also thought about the impact of an accident on the model. I feel as though the model is unrealistic because drivers do not tend to move at the same speed; the agents in the model were highly dependent on each other. Also, moving at a high acceleration rate and decelerating abruptly will most likely cause a collision.

Kevin Carrig

May 30, 2014, 2:44:32 PM
to au-ec...@googlegroups.com
The butterfly model examines the movement of an agent to a neighboring patch based on some probability defined in the setup statement (i.e., set q 0.4). An interesting application of this approach might include some "learning" assignment for each agent. The q parameter in the current model makes each turtle move deterministically to the highest neighboring patch with a probability of 0.4, and to a randomly chosen neighboring patch with a probability of 0.6. We could build on this approach to increase the value of q (the probability of moving to the highest neighboring patch) with each additional high elevation point reached by the turtle. In this scenario, the butterfly undergoes some learning process in which each additional elevation point reached informs the agent's ability to find a subsequent high elevation point. In terms of implementation, I am still unfamiliar with the syntax to carry out a process like the one outlined above, but a sketch follows.
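Here is a hedged sketch of that learning rule layered on Railsback and Grimm's butterfly move step (elevation is the book's patch variable; making q a turtle variable and the 0.01 increment are my assumptions):

    turtles-own [ q best-elevation ]

    to move   ;; butterfly procedure
      ifelse random-float 1 < q
        [ uphill elevation ]           ;; climb to the highest neighboring patch
        [ move-to one-of neighbors ]   ;; otherwise wander randomly
      ;; "learning": whenever a new personal-best elevation is reached,
      ;; nudge q toward deterministic hill climbing (capped at 1)
      if elevation > best-elevation [
        set best-elevation elevation
        set q min (list 1 (q + 0.01))
      ]
    end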

Natalie Chambers

May 31, 2014, 2:31:52 PM
to au-ec...@googlegroups.com
Kevin, I agree; I think learning would be a great addition to the model. If the butterfly "learns" the direction in which the highest point generally lies, then it would become more efficient in its search for the next patch to go to. For example, if the butterfly moves west 10 consecutive times, then it would "learn" to check the patch to the west first in order to be more efficient in its search.

Another way this model could be utilized could be for finding water sources. Maybe this is a stretch, but if animals are searching for a water source in the wild, they could use the soil moisture content to determine the direction they need to go to find the source. If the animals move to the next patch based on its having a higher soil moisture level, then they could eventually move until they find a patch with a water hole or water source.

Amanda Saville

May 31, 2014, 2:47:26 PM
to au-ec...@googlegroups.com
Kevin, the learning process you propose is an interesting one.  Maybe it would be possible to use similar logic as we did in our mushroom hunts to add some small probability to q when a butterfly successfully chooses the highest neighborhood patch to account for this learning curve.

As for other ideas for simulations using gradient, I found it challenging to think of gradients apart from those in nature.  Similar to elevation, temperature could be another example of a gradient.

Outside nature, I was thinking that maybe a gradient concept could be used to model the amount of political information or mediums of political information individuals are met with on a daily basis through television, advertisements, billboards, and random passersby.  Those who live in cities are heavily bombarded with political information on a daily basis.  Those who live in the suburbs and possibly commute into cities are still met with many different kinds of mediums, but less than those of city dwellers.  This spectrum could go from the one extreme, to those who live in very small and rural towns, where the closest city is several hours away.

What I think would be interesting about this model would be to examine how politically vocal or active residents in different locations would be.  We could of course imagine the very politically involved city dweller, who attends conferences and perhaps even participates in a campaign.  But we could also imagine those living in rural areas being equally as involved and support candidates financially and through other means.  It would be interesting to see if different sides of the spectrum have the same levels of political involvement or not.  If they do have the same levels, then maybe the types of engagement would differ according to location.

Adaora Isichei

May 31, 2014, 8:49:22 PM
to au-ec...@googlegroups.com

So here is my understanding of [railsback.grimm-2011-pup]_ ch. 3, the ODD protocol. The authors describe ODD (Overview, Design concepts, and Details) as a guideline for conducting ABMs. ODD encourages developers to think through and describe their models adequately. This protocol can be broken down into the following.

1.     Overview - purpose; entities; process overview and scheduling.

2.     Design concepts

3.     Details - initialization, input data, and submodels.

Overview asks questions such as: what problem is your model trying to address? This question is important because it provides a clear guideline for what should (and shouldn't) be included in your model. It also sets expectations for the outcome of the model.

Entities and their variables are used to define the model.

Process overview and scheduling simply asks what changes occur in the model with regard to agents, entities, and the environment.

Design concepts provides an overview of how key elements of the model are implemented in the design.

Initialization takes into account the initial state of the model world.

Input data - (I'm not sure I totally understand this aspect, but here is my understanding of input data) they are changes that occur within the model. I would appreciate it if someone could explain this (thanks in advance).

Submodels are the major processes in your model.


Kevin Carrig

Jun 6, 2014, 3:46:44 PM
to au-ec...@googlegroups.com
Cellular Automata- Should CA interest social scientists?

Social scientists should be interested in cellular automata if they are comfortable with the qualifications that come with converting these automata into models of social systems. One particularly relevant discussion of this application is John H. Miller and Scott E. Page's Complex Adaptive Systems. CA most benefit those social scientists who want to demonstrate how simple, local interactions among agents can result in interesting aggregate behavior. These models imply that the system being observed includes some set of agents that may be spatially related and operate relative to the actions of their nearest neighbors. Because CA are particularly suited to stochastic processes, time-series events are especially relevant at this level of analysis. Miller and Page present four qualifications that social scientists may face when applying CA to their experiments. I found them interesting and analogous to the assumptions of OLS models. Both sets of qualifications may be violated in the simulation (regression), but there are corrective measures in place to preserve the validity of the output:

Miller and Page's Qualifications for CA of Social Systems:

1. "We are willing to accept the notion that all agents employ a common,fixed rule." Many models of social systems embody such behavior by assuming a single, "representative" agent in the greater simulation.

2. "Even when all agents begin by using the same rule, mechanisms are still needed to prevent adaptive agents from deviating away from this rule". The authors qualify this condition by noting that models do not have to be confined to static, homogeneous rules. Instead, we can look to heterogeneous and adaptive rules to satisfy this condition.

3. "Another problematic assumption in the preceding model for social situations is that agents myopically apply their behavioral rules to the actions observed last period." This qualification implies that either agents are incapable of remembering and processing more elaborate histories or the actions of the last period are a sufficient statistic of the past.

4. "We need to assume that the timing of behavior in these models, namely, that all agents update their actions simultaneously, is sufficiently close to real systems." Specific timing issues can be addressed in simulations with the use of lags or delays to approximate the real world stochastic process. 
 

Alan Isaac

Jun 6, 2014, 10:45:12 PM
to au-ec...@googlegroups.com
Nicely put.  Does the Schelling segregation model fit this description?
Alan Isaac

Alan Isaac

Jun 6, 2014, 10:52:31 PM
to au-ec...@googlegroups.com
Think of the input data as exogenous variables.  You would specify how they change over time, but these changes would not be determined by the model interactions.  For example, an ecological model might input seasonal temperature variations, or a model of the macroeconomy might input variations in the policy determined interest rate.

hth,
Alan Isaac

Natalie Chambers

Jun 7, 2014, 10:27:58 AM
to au-ec...@googlegroups.com
When experimenting with the Wealth Distribution model I had a few thoughts:

I found it interesting that in almost any combination of all other variables, as long as the number of people was at its maximum (1000), the wealth distribution (CLASS PLOT) went directly to its extremes: very few wealthy and an increasing number of lower-class people... and then remained almost stable for the duration of the model. Additionally, when all sliders are maximized but the number of people is small (around 80), the class plot never stabilizes; it takes a much longer time for the classes to diverge to the extremes, and once they do, the classes oscillate quite dramatically. This was interesting because, although all other factors remained the same, the number of people in the model changed significantly, which made more money accessible to fewer people and gave the lower class a greater opportunity to attain more money. The Gini coefficient also oscillates much more dramatically in this scenario.  However, when looking at the Gini coefficient I found that it was almost always around .5 for any combination of variables, whereas real Gini coefficients vary widely from .2 to .6. (https://www.cia.gov/library/publications/the-world-factbook/rankorder/2172rank.html)

I think this model could be improved by adding a histogram (similar to the one we were supposed to create in GR) showing the amount of wealth per person, in addition to the one showing the number of people in certain class categories. I think this would provide a good picture of the breadth of the wealth inequality, because it would create a slope similar to the Lorenz curve but visible in intervals of people. A sketch of such a histogram follows.
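A minimal sketch of that histogram, assuming the model's turtles carry a wealth variable and that these commands run in a plot widget set to bar mode:

    set-plot-x-range 0 (1 + max [wealth] of turtles)
    set-histogram-num-bars 20
    histogram [wealth] of turtles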

Also, as mentioned in the Things to Try section of the model, I think having a switch for random distribution vs. gradient distribution would be useful. It would increase the possibility that people with a low initial value could get out of the lower class, which I think is an important metric.

One of the most interesting combinations of variables I found was when the number of people was maximized, their vision and metabolism were minimized, the num-grain-growth was minimized, and all other variables were somewhere in the middle. This created an immediate jump to extremes on the class plot, but then around time 525 the differential between the classes shrinks and stabilizes. (I have attached a photo because it was so interesting.) I would love some thoughts on why this sudden and dramatic alteration occurred.
Wealth.png

Adaora Isichei

Jun 7, 2014, 10:36:08 AM
to au-ec...@googlegroups.com
- Wealth Distribution model (Models Library):
This model tests Pareto's law; it assumes that the majority of wealth is held by the upper class.
The model starts out with a seemingly equal income distribution, but as time goes on, the number of individuals belonging to the middle and upper-middle classes falls, creating higher income disparity. The interesting revelation in this model is that income disparity is low when life expectancy is low and metabolism rates are small.

Adaora Isichei

Jun 7, 2014, 11:45:32 AM
to au-ec...@googlegroups.com
Should Cellular Automata interest social scientists?

Social science is the study of the relationships among individuals in a society, so a person's decision is highly dependent on what other people in their society are doing. Cellular automata are an excellent tool for testing relationships in the social sciences, because a change in a cell's state is solely governed by the states of its immediate neighbors.

The segregation model is a good example of a cellular automaton: the model states that people have preferences to associate with individuals similar to themselves, so people with similar traits cluster together. In cellular automaton terms, cells change their state to associate with neighboring cells that have similar qualities.

The party model is another example that can be viewed as a cellular automaton.



Alan Isaac

Jun 7, 2014, 12:17:19 PM
to au-ec...@googlegroups.com
Wealth Distribution

Just to be clear, interest in models that produce such extreme inequality traces to the inequality data:
http://www.ons.gov.uk/ons/resources/pngfigure6_tcm77-295165.png
http://www.newyorker.com/online/blogs/johncassidy/2013/11/inequality-and-growth-what-do-we-know.html

Be sure to distinguish income (flow) and wealth (stock)!

Adaora Isichei

Jun 7, 2014, 9:09:23 PM
to au-ec...@googlegroups.com
Thanks Kevin for this post.


Amanda Saville

Jun 9, 2014, 8:35:53 PM
to au-ec...@googlegroups.com
The work by Miller and Page that Kevin referenced provided some interesting qualifications for cellular automata.  According to these criteria, I think that Schelling's Segregation Model fits the description of a cellular automaton.

The only one I'm not quite sure that Schelling's model meets is Miller and Page's #2.  I'm not sure that there even would be adaptive agents in Schelling's model, and if there were, what mechanisms would be built in to prevent agents from deviating too far from the rule.

2. "Even when all agents begin by using the same rule, mechanisms are still needed to prevent adaptive agents from deviating away from this rule". 
 
Additionally, while we do in fact assume that 

 [...] the timing of behavior in these models, namely, that all agents update their actions simultaneously, is sufficiently close to real systems."

as Kevin hinted at by referencing lags to make models more realistic, I'm not sure that this assumption squares perfectly with Schelling's model.  As all agents calculate their happiness and decide whether to stay or to move simultaneously, it seems that this model would show a lot more moving than is actually present in reality--especially given that an agent may move to the closest patch satisfying his/her preferred neighborhood conditions, only to find that future neighbors have moved at the same time--thus necessitating an additional move on the part of the same agent.


Alan Isaac

Jun 10, 2014, 10:20:28 AM
to au-ec...@googlegroups.com
Moving in the Segregation Model

You are homing in on a difficulty in representing the Schelling model as a classic cellular automaton!

Alan Isaac

Adaora Isichei

Jun 14, 2014, 10:42:14 PM
to au-ec...@googlegroups.com
Hey Amanda, 

I think you basically highlighted the limitation of the NetLogo segregation model. In Schelling's original model, agents move to the closest neighborhood that meets their happiness requirement, not to a random one.


Alan Isaac

Jun 15, 2014, 8:11:33 AM
to au-ec...@googlegroups.com
Models Library

Remember, the goal of the models in the Models Library is to be as simple as possible, in the following sense: the model, although simplified, should illustrate the core relationships and outcomes of the model of interest.  There are *many* implementations of the segregation model in NetLogo.  The Models Library contains one that is highly simplified.  An interesting question is the following: does the simplified model still make the key points Schelling was interested in?

Kevin Carrig

Jun 18, 2014, 10:35:38 PM
to au-ec...@googlegroups.com
Sugarscape: Can such a highly stylized model help us understand real societies? If so, how?

John H. Miller and Scott E. Page discuss the tenets of the Sugarscape model in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Their assessment of this simulation concludes that the real-world applicability of the model is rooted in its function as a "fertile playground from which theories can be created, refined, and tested". Computational models offer a valid way to develop new social theories because of their inherent "experimental" qualities - i.e., they are fully observable, recoverable (reproducible), and repeatable. Methods such as setting a random seed contribute to this reproducibility.

It is particularly interesting to note the authors' treatment of the Sugarscape model in the context of economic theory. While the "Sugarscape model is often dismissed by economic theorists as lacking useful economic content", this discussion contends that such arguments are misplaced ("especially given the usual lack of timidity in the application of economic theory to other areas"). We can apply this highly stylized model to help us understand real societies because its fundamental concepts are those essential to economic theory: scarcity, choice, and exchange. If we treat Sugarscape as a "test bed for our theories", we can test how well standard theories, developed to explain sophisticated phenomena, operate in the simplified, artificial worlds we create. The results of the Sugarscape model allow us to abstract its concepts rather than directly apply them to the real world.

Kevin Carrig

Jun 18, 2014, 11:46:54 PM
to au-ec...@googlegroups.com

Sugarscape: Epstein & Axtell’s Immediate Growback model

The second chapter of Growing Artificial Societies touches on the fundamental components of the Sugarscape model and some metrics to assess the outcomes of the simulation. One particular application included a measure of economic inequality called the Gini coefficient and its counterpart, the Lorenz curve. This plot is described as "the fraction of total social wealth (or income) owned by a given poorest fraction of the population." If we are hoping to observe the "emergence" of inequality in our model, this seems like a measure worth exploring.

The implementation of the Lorenz curve seems straightforward: "one first ranks the agents from poorest to wealthiest", with "each agent's ranking determining its position along the horizontal axis." Then, for a given agent, its associated plot value is equal to "the total wealth held by the agent and all agents poorer than the agent". My initial thought was to leverage the native wealth distribution histogram in the model. By exporting the plot values, I hoped the output would include a CSV of agent IDs and their associated wealth values at the end of the simulation. However, the values in the export table presented the components of the histogram itself (i.e., x-min, x-max, y-min, y-max, as well as some indication of bin size). In the absence of the necessary data for the Lorenz curve, a logical next step may be to present the plot within NetLogo. However, implementing the aggregate wealth counts required for the curve has proven tricky. Does anyone see some intuitive way to add an additional plot that captures this wealth inequality measure? For reference, the Gini coefficient and Lorenz curve discussion begins on page 36 of Growing Artificial Societies.



Alan Isaac

Jun 19, 2014, 8:24:15 AM
to au-ec...@googlegroups.com
Lorenz Curve

I only required you to read the Info tab for the Models Library "Wealth Distribution" model, but if you look at the code, it includes plotting of a Lorenz curve.  You will find related code in my lecture on the Blinder model:
https://subversion.american.edu/aisaac/notes/blinder_slides.xhtml
You can find related Python code in my brief income distribution notes (which are not part of this course):
https://subversion.american.edu/aisaac/notes/incdist.pdf
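In the same spirit, here is a hedged sketch of a Lorenz plotting procedure (not the model's exact code), assuming turtles-own [ wealth ] and that the current plot has been set to a Lorenz plot widget:

    to plot-lorenz
      let wealths sort [wealth] of turtles   ;; rank agents poorest to richest
      let n length wealths
      let total sum wealths
      let cumulative 0
      let i 0
      clear-plot
      while [i < n] [
        set cumulative cumulative + item i wealths
        ;; x: poorest fraction of the population, y: fraction of total wealth
        plotxy (100 * (i + 1) / n) (100 * cumulative / total)
        set i i + 1
      ]
    end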

Alan Isaac

Adaora Isichei

Jun 21, 2014, 1:48:25 PM
to au-ec...@googlegroups.com
Hey Kevin,

I agree with your assessment of the Sugarscape model. The main difference between the NetLogo model and AE's is the final surviving population, also known as the carrying capacity. Agents in this model stop moving when they cannot see any cell with a higher sugar level than the cell they are on. This is why an agent's vision is important: agents with low vision tend to die fast, which is why average vision rises and why some agents die off no matter how low you set the population.



Adaora Isichei

Jun 21, 2014, 2:03:27 PM
to au-ec...@googlegroups.com
Hey Professor Isaac,

I was wondering if there is a NetLogo model for "Concurrent Sexual Partnerships and Primary HIV Infection: A Critical Interaction." I searched through the NetLogo Models Library but only found the AIDS model, which I think is different.

Matthew Reardon

Jun 21, 2014, 4:32:10 PM
to au-ec...@googlegroups.com
Following on Kevin and Adaora's posts, an important real-world concept that arises is what happens to the average vision.  Similar to the model I'm working on for the class project (the Melitz trade model), we see micro-level decisions driving macro-level outcomes.  The emergence of an industry-wide variable level (vision here, productivity in the Melitz model) without industry-wide "rules" for the decision process yields an important real-world social outcome: macro-level "laws" can become irrelevant if individual decisions display a stronger or different preference.  We also saw this in the segregation model, where low individual segregation preferences drove a much higher segregation level.  These models demonstrate an important paradigm for economics: the inclusion of micro foundations in macroeconomic theory.

Alan Isaac

Jun 21, 2014, 6:33:33 PM
to au-ec...@googlegroups.com
Concurrency and HIV

That is a very interesting paper, and the code is available, but it is a mixture of C++ and R.  I wrote an implementation in Python, but not in NetLogo.

Alan Isaac

Natalie Chambers

Jun 21, 2014, 6:50:15 PM
to au-ec...@googlegroups.com
Sugar Scape: Utility Maximization 

From my understanding of the Epstein and Axtell reading and the Sugarscape model, the agents are assumed to be utility maximizers. If so, then I think this presents potential issues for the validity of this model. Although it is a common argument, particularly in ethics, to assume that agents are utility maximizers (utilitarianism), I think it is important to consider a situation where not all agents are utility maximizers, because utilitarianism ignores justice. This applies to this model because it assumes that agents want to maximize their own sugar and only care about themselves. However, it is a common argument against utilitarianism that this is often not the case: consider, perhaps, charitable giving by those with good vision and fast metabolisms to help others who have lesser natural endowments.

On another note, I was initially impressed by how well the authors were able to control for the numerous variables that impact this model, but as I continued reading I wondered why they chose to have vision and metabolism as static variables throughout the experiment. I think if these two variables were altered the longer agents "were in the game," then it would be a more realistic representation of the ebbs and flows of the different agents. Although an agent may do very well in the game, and in the current model live forever, having diminishing vision and metabolism (perhaps randomly) may allow us to better represent a real society.

Natalie Chambers

Jun 21, 2014, 7:05:23 PM
to au-ec...@googlegroups.com
Carrying Capacity

I think discussing the idea of carrying capacity is an important addition to my previous post. In the book, the authors define carrying capacity by explaining that "a given environment will not support an indefinite population of agents." A further definition found online that is a bit more informative is: "The carrying capacity of a biological species in an environment is the population size of the species that the environment can sustain indefinitely, given the food, habitat, water and other necessities available in the environment. In population biology, carrying capacity is defined as the environment's maximal load, which is different from the concept of population equilibrium." In this model, the carrying capacity is determined by the ability of agents to obtain sugar from the sugar stacks and maintain high energy levels while doing so. Carrying capacity is important because it demonstrates a limit to the population size sustained in the environment.

It would be interesting to see how having variable metabolism and vision for an individual agent throughout the model would impact the speed at which the model reaches carrying capacity, and what the value of the carrying capacity would be.

Amanda Saville

Jun 21, 2014, 7:12:08 PM
to au-ec...@googlegroups.com
I think Natalie makes a good point by addressing the issue of utility maximization.  I know in the introduction Epstein and Axtell mentioned that one of the later chapters addresses credit lending between agents, and addresses wealth inequality by changing local rules of taxation and inheritance, but I don't recall them mentioning anything about charitable giving or mutual support for those agents that are less "fit."

Just to bring up another aspect of Sugarscape as described by Epstein and Axtell, I thought it was interesting that with the implementation of seasons, "speciation" could occur as those that were less "fit" became migrators, and those that were more "fit" became hibernators.  Though I understand that there wouldn't be any cross-breeding between north and south hibernators, I see no reason why these two groups wouldn't become diluted by mating between migrators and hibernators over time. 

Alan Isaac

Jun 22, 2014, 9:30:03 AM
to au-ec...@googlegroups.com
Utility and Justice

This steps outside the scope of our class, so I will comment very briefly.  Utilitarianism is one approach to a theory of justice, and a very important one.  Historically it has been very progressive.  For example, consider our approach to penal institutions.  If you take a utilitarian approach, you try to produce the best outcome for society.  If you take a retributivist approach, in contrast, you try to make sure criminals "get what they deserve".  At various times (including the modern era) retributivists have proposed that delinquent debtors deserve debtors' prison, adulterers deserve stoning, etc.  The replacement of retributivist motivations with utilitarian reasoning has in this sense been, in my opinion, one of the great advances of civilization.

But by focusing on society, utilitarianism does tend to underplay the individual.  In particular, utilitarian approaches to social issues need to be constrained by human rights, and in my view utilitarianism cannot give an adequate account of human rights.

Moving into something less abstract, you seem concerned that individuals care only about their own consumption.  You could change that without giving up utility maximization by adding to an individual's utility function a payoff from her neighbors' average consumption (for example).
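For example, here is a minimal sketch of such a utility function in NetLogo (the consumption variable, the interaction radius, and the 0.2 weight on neighbors are all illustrative assumptions):

    to-report utility   ;; turtle reporter; assumes turtles-own [ consumption ]
      let others other turtles in-radius 3
      let social ifelse-value (any? others)
        [ mean [consumption] of others ]
        [ 1 ]   ;; neutral value when there are no neighbors
      ;; Cobb-Douglas payoff over own consumption and the neighborhood average
      report (consumption ^ 0.8) * (social ^ 0.2)
    end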