/*********************************************************************
 SIMWORLD Version = 4.0 Beta Teach File

 Copyright (C) 2002 Matthias Scheutz (mscheutz@nd.edu)
 Artificial Intelligence and Robotics Laboratory
 Department of Computer Science and Engineering
 University of Notre Dame, USA
 http://www.nd.edu/~mscheutz/
 http://www.nd.edu/~airolab/

 All rights reserved. This program is licensed and distributed under
 the terms stated in the accompanying COPYRIGHT file.

 Last modified: 11-27-02
*********************************************************************/

This file contains instructions on how to use the SIMWORLD simulation
(which is based on the SIMAGENT toolkit -- see TEACH SIM_AGENT). For a
more exercise-oriented, quick-start tutorial, see TEACH
simworld_quickstart.

SIMWORLD is an artificial life simulation, which provides
functionality for running different interacting agents and objects in
a simulated, continuous environment. The agents are controlled by
rules written in POPRULEBASE (see TEACH POPRULEBASE), which is part of
the SIMAGENT toolkit. New agent behaviors can be defined without any
knowledge of POP11 (the underlying programming language of SIMAGENT
and SIMWORLD). It is also possible to extend the simulation in various
ways (e.g., add new agents or objects), but this will typically
require rudimentary knowledge of and programming skills in POP11.

CONTENTS - (Use ENTER g to access required sections)

 -- Introduction
 -- The SIMWORLD environment
 -- The basic agents
 -- -- The exteroceptive sensors
 -- -- The proprioceptive sensors
 -- -- The effectors
 -- -- The control system
 -- -- The basic agent rulesystem
 -- Using SIMWORLD
 -- -- startsimulation and continue
 -- -- EXERCISE 1.
 -- -- startsimulationstep and continuestep
 -- -- Commands to manipulate an agent's database
 -- -- Commands to manipulate simulation objects
 -- -- Commands to track slots of simulation objects
 -- -- How to add your own rules
 -- -- EXERCISE 2.
 -- -- Running the simulation with different groups of agents
 -- Solutions to exercises
 -- -- Solution 1
 -- -- Solution 2

-- Introduction ---------------------------------------------------------

This file will teach you how to use the SimWorld package to create and
run agents. First move the cursor over the line with POP11 code below
and compile it (in XVED, ESC d compiles the line the cursor is on):

    uses simworld

This should have loaded the simulation. To get a feeling for it
(before going into any detail), compile the following code and watch
what happens (return to this screen once the simulation is over). To
compile the code, first mark it as a range: move the cursor to the
first line and mark the start of the range, then move the cursor to
the last line and mark the end of the range. Now compile the marked
range (ENTER lmr compiles a marked range).

    startsimulation([
        ;;; parameter list
        [[quitif 100]]
        ;;; entity list
        [[basic_agent [startup 20]] [obstacle [startup 10]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

You should see two windows, one of which shows the whole simulated
environment and the other of which is a magnified shot of an area of
the environment, as well as the XVED output window, which will give
you some information about the status of the simulation. Once the
simulation is completed, you can close the simulation windows using
the standard close button, or alternatively, you can compile this:

    close_environment();

If you wish to run the simulation more slowly, do this:

    startsimulationstep([
        ;;; parameter list
        [[quitif 100]]
        ;;; entity list
        [[basic_agent [startup 20]] [obstacle [startup 10]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

After each cycle of the simulated world the simulation will pause and
you will see a "?" prompt in the output.p file window.
Move the mouse pointer into that window (or direct keyboard input to
that file) and press RETURN to get the next step in the simulation.
You can interrupt with CTRL-c. When startsimulationstep is running,
additional commands are available during each pause, described below.

-- The SIMWORLD environment ---------------------------------------------

Generally, SIMWORLD consists of a potentially unlimited continuous
surface (although it is usually restricted to an area of 800 by 800
units). It is populated with various spatially extended objects:

 - various kinds of agents (circles of different color and size with a
   black spot indicating their current heading)
 - static and moving obstacles of varying size (red square objects)
 - food and water sources (small green and blue circles)

Food and water sources pop up within a particular area and disappear
after a pre-determined period of time, if not consumed by agents
earlier. Agents are in constant need of food and water. Moving
consumes energy and water in proportion to their speed, and even if
they do not move, they will still consume a certain amount of both.
When the energy and/or water level of an agent drops below a certain
threshold, the agent "dies" and is removed from the simulation. Agents
also die and are removed if they run into other agents or obstacles.

After a certain age (measured in terms of simulation cycles), agents
reach maturity and can procreate asexually. The procreation age is
determined by a global parameter of the simulation, OMEGA. Its default
value is set to 250, but it can be set to other values before the
simulation is started, e.g., to 300:

    300 -> OMEGA;

Note that setting OMEGA to a very large number (i.e., larger than the
number of update cycles of the simulation) will effectively turn
procreation off. Having offspring requires a certain amount of energy
and water, which will be subtracted from the parent.
Hence, agents have to have a certain minimal amount of energy and
water to be able to have offspring. The number of offspring depends on
how much energy and water they have. In general, they will have
anywhere from 1 to 4 offspring, which pop up in the vicinity of the
agent one at a time. There is also a minimum regeneration period
during which agents cannot have additional offspring.

Simulations in SIMWORLD are run for a predetermined number of update
cycles, where each update cycle consists of the following steps:

 - new objects (such as food and water sources) are created (if
   required)
 - the sensory inputs for all agents are determined
 - the rulesystems of all agents are run
 - the actions of all agents are attempted, as well as actions of
   other objects (e.g., moving obstacles)
 - if required, objects are removed from the simulation (e.g., agents
   that died)
 - if the simulation is run in stepmode (see the section on "Using
   SIMWORLD"), then the user is prompted for a command

The simulation ends when the predetermined number of cycles is
reached. The following sections describe SIMWORLD in greater detail.

The SIMWORLD package is by default installed in

    $poplocal/local/simworld/

It consists of the following files:

 - simworld.p
   The startup file, linked to $poplocal/local/lib/simworld.p so that
   "uses simworld" works in Pop-11,

and in the src/ subdirectory:

 - simworld.p (the basic environment simulation),
 - simworldGUI.p (the graphical component of the simulation),
 - basicagents.p (the basic agent definitions),
 - basicagentsGUI.p (the graphical component of basic agents),
 - basicrules.p (the basic agent rules), and
 - basicrulesystems.p (the basic agent rule system).

There is also a "teach" subdirectory, containing two files

 - teach/simworld_quickstart (linked to
   $poplocal/local/teach/simworld_quickstart), and
 - teach/simworld (linked to $poplocal/local/teach/simworld), namely
   the file you are now reading.
New agent definition files can be added; see the section on "extending
the simulation".

-- The basic agents -----------------------------------------------------

Basic agents consist of a simulated body with sensors and effectors,
and a control system, which consists of a database (used for
short-term and long-term memory, e.g., as a "knowledge base") and a
rulesystem (used to implement the "agent program"). They are equipped
with four exteroceptive sensors:

 - sonar
 - smell
 - touch
 - vision

They also have four proprioceptive sensors:

 - energy level
 - water level
 - speed
 - heading

On the effector side, agents have:

 - motors for moving forward and backward
 - motors for turning
 - a mechanism for ingesting food and water

Basic agents are built in accordance with the SIMAGENT model (see
TEACH SIM_AGENT), except that the general SIMAGENT sensory function is
redefined for the above sensors for efficiency reasons.

-- -- The exteroceptive sensors

Sonar is used to detect obstacles and other agents within the given
sonar range. You can turn on a graphical representation of the sonar
range by setting the global variable SHOWSONAR to "true":

    true -> SHOWSONAR;

You can observe its effect by re-running the above command, but with
20 cycles instead of 100:

    startsimulation([
        ;;; parameter list
        [[quitif 20]]
        ;;; entity list
        [[basic_agent [startup 20]] [obstacle [startup 10]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

(Showing the sonar range for each agent will slow down the simulation.
It can be interrupted with CTRL-c.) Turn off the sonar range display
with

    false -> SHOWSONAR;

The sonar system computes three "force vectors" O, A, and S for each
obstacle, each agent of a kind different than the current agent, and
each agent of the same kind as the current agent, respectively. These
are the respective sums, scaled by 1/(|v|*|v|), of all vectors v from
the agent to the obstacle or other agent, where |v| is the length of
the vector v.
Hence, O, A, and S are rough indications of the direction in which to
expect either obstacles or other agents, where |O|, |A|, and |S|
indicate the "density" of the object distribution (i.e., if the object
is close by, the vector's length will be large, but it could also be
large because there are many objects together at some distance).

The smell system works in a similar way to sonar in that it also
computes two force vectors, F and W, for food and water. The
respective smell range can be displayed by setting the global variable
SHOWSMELL to "true":

    true -> SHOWSMELL;

and turned off with

    false -> SHOWSMELL;

All force vectors can be displayed by setting the global variable
SHOWVECTORS to "true":

    true -> SHOWVECTORS;

To turn that effect off do

    false -> SHOWVECTORS;

The touch system is used to detect impending collisions (with agents
or obstacles) as well as consumable food and water sources. In the
basic agent rulesystem it is connected to a reflex mechanism that will
make the agent go back and reorient itself. The touch system also
provides information about the kind of object touched (e.g., whether
it is an agent, obstacle, food or water) and where it is located
relative to the agent's position.

The vision system can identify any object within vision range and
determine its size as well as distance and direction relative to the
agent. As opposed to smell and sonar (which can detect objects
anywhere within their respective ranges), the vision system is
restricted to a circular segment in front of the agent.
The respective vision range can be displayed by setting the global
variable SHOWVISION to "true":

    true -> SHOWVISION;

Try that with

    startsimulation([
        ;;; parameter list
        [[quitif 20]]
        ;;; entity list
        [[basic_agent [startup 20]] [obstacle [startup 10]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

And turn it off with

    false -> SHOWVISION;

-- -- The proprioceptive sensors

The heading sensor is used (like a compass) to indicate the heading of
the agent relative to a fixed coordinate system, where 0.0 degrees
means "heading east", and positive values (up to 360.0) indicate
directions measured counterclockwise from 0.0. For example, a heading
of 90.0 means that the agent is headed north, 135.0 means north-west,
etc.

The speed sensor is used to measure the current speed of the agent. A
speed of 0 means that the agent is not moving at all.

The water and energy level sensors indicate the current energy and
water levels, which have to be greater than zero for the agent to stay
alive. In the basic agent rule system a reflex-like mechanism is
triggered when energy or water levels fall below a certain threshold,
which will make the agent move towards food or water exclusively.

-- -- The effectors

Motors are used for turning and for movement. Agents can turn any
number of degrees at a time and move a distance identical to their
current speed in one update cycle. Note that both energy and water
levels decrease quadratically with speed. Hence, moving the same
distance d at speed 10 will cost far more than 10 times as much as
moving d at speed 1. At the same time, d can be moved in one cycle at
speed 10, while it takes 10 cycles at speed 1 to move d units.
Obviously, there is an interesting trade-off between speed and time.

When agents come within digestion range of a food or water source (as
indicated by the touch sensor), they use their ingestion mechanism to
ingest food or water.
Note that consuming food/water takes time, which is proportional to
the amount of energy/liquid stored in the food/water source and
depends on the maximum amount of food/water an agent can take in at
any given time. Hence, the control system needs to make sure that the
agent won't run off while there is still food or water left if the
food/water source is to be consumed completely.

-- -- The control system

Basic agents are controlled by a system that will repeatedly go
through the following sequence of steps:

 - sensory information is gathered and put in the agent's database
 - the agent's rulesystem is run
 - (possibly) actions are performed

The behaviors of basic agents are standardly defined by a rulesystem
called "basic_agent_rulesystem" (defined in "basicrulesystems.p" and
"basicrules.p") and can be redefined (see the section on "How to add
your own rules" below). Before the rulesystem is run, the following
perceptual information will be placed in the agent's database:

 [collision <boolean>] -- <boolean> can be either %true% or %false%
    depending on whether the agent is about to collide with an object
 [obstcollision <boolean>] -- <boolean> can be either %true% or
    %false% depending on whether the agent is about to collide with a
    (static or moving) obstacle
 [agentcollision <boolean>] -- <boolean> can be either %true% or
    %false% depending on whether the agent is about to collide with an
    agent of its kind
 [othercollision <boolean>] -- <boolean> can be either %true% or
    %false% depending on whether the agent is about to collide with an
    agent of a different kind
 [foodvector <x> <y>] -- <x> and <y> are the coordinates of an
    agent-centric vector pointing in the direction of food
 [watervector <x> <y>] -- <x> and <y> are the coordinates of an
    agent-centric vector pointing in the direction of water
 [agentvector <x> <y>] -- <x> and <y> are the coordinates of an
    agent-centric vector pointing in the direction of other agents of
    its own kind
 [othervector <x> <y>] -- <x> and <y> are the coordinates of an
    agent-centric vector pointing in the direction of other agents of
    other kinds
 [obstaclevector <x> <y>] -- <x> and <y> are the coordinates of an
    agent-centric vector pointing in the direction of static or moving
    obstacles
 [ingest food] -- the agent can ingest food
 [ingest water] -- the agent can ingest water
 [food <size> <distance> <direction>] -- a food source of size <size>
    has been spotted at distance <distance> in direction <direction>
 [water <size> <distance> <direction>] -- a water source of size
    <size> has been spotted at distance <distance> in direction
    <direction>
 [touch <direction>] -- something has been touched at direction
    <direction>
 [obstacle <size> <distance> <direction>] -- an obstacle (moving or
    static) of size <size> has been spotted at distance <distance> in
    direction <direction>
 [agent <size> <distance> <direction>] -- another agent of size <size>
    has been spotted at distance <distance> in direction <direction>
 [other <size> <distance> <direction>] -- another agent (of a
    different kind) of size <size> has been spotted at distance
    <distance> in direction <direction>
 [energylevel <value>] -- <value> is the current energy level
 [waterlevel <value>] -- <value> is the current water level
 [heading <degrees>] -- <degrees> is the current heading relative to
    the fixed N-E-S-W coordinate system, where 0 is E and negative
    increments go clockwise (positive increments go counterclockwise)
 [speed <value>] -- <value> is the current speed

Note that all directions are in degrees (from 0.0 to 360.0). In
general, all values are reals. All of these items can be manipulated
by rules.
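To illustrate how these database items can be used, here is a small
POPRULEBASE ruleset sketch (the ruleset and rule names here are
invented for this example and are not part of the standard rulesystem)
that matches the [food ...] percept and prints a message:

    define :ruleset report_food_ruleset;
        [DLOCAL [prb_allrules = true]];

        ;;; fires whenever a food percept is in the database;
        ;;; ?size, ?distance and ?direction are pattern variables
        ;;; bound to the values in the matched item
        RULE report_food
            [food ?size ?distance ?direction]
            ==>
            [SAY food of size ?size spotted at distance ?distance]
    enddefine;

See TEACH POPRULEBASE for the full condition and action syntax.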
Additional items of importance to other functions are:

 [halting_for eating] -- put this in the database when the agent
    starts to eat and it will be removed by the eating action (see
    below) if and only if the agent has reached its maximum energy
    capacity
 [halting_for drinking] -- put this in the database when the agent
    starts to drink and it will be removed by the drinking action (see
    below) if and only if the agent has reached its maximum water
    capacity

The agent's effectors are controlled by so-called "do" commands of the
form

    [do <action>]

where <action> can be any of the following:

 eating -- this will make the agent "eat a bite"; once the agent is
    "full", one [halting_for eating] item will be removed from the
    database (this can be used to check whether the agent is full)
 drinking -- this will make the agent "take a sip"; once the agent is
    "full", one [halting_for drinking] item will be removed from the
    database (this can be used to check whether the agent is full)
 map_motor -- adds all five vectors (i.e., W, F, A, S, and O) using
    predetermined gain values to find a new direction in which to move
    to go toward food and water and away from other agents and
    obstacles -- this will set a new heading
 moving -- moves the agent in the current direction at the current
    speed (unless there is a clause of the form [halting ...] in the
    database, in which case the action has no effect)
 turning <degrees> -- adds <degrees> degrees to the current heading
    (positive values will turn the agent counterclockwise, negative
    values clockwise)
 speeding_up -- increases the speed by 1 unit (if the maximum speed is
    reached, this action has no effect)
 slowing_down -- decreases the speed by 1 unit (if the current speed
    is 0, this action has no effect)
 halting_for <reason> -- where <reason> is a reason for halting, e.g.
"eating", will make the agent halt right away and put the clause [halting_for ] into the agent's database (make sure to remove it eventually, otherwise the agent will not be able to move) Note that the halting action can be used in combination with the eating and drinking actions and the touch sensor to implement eating and drinking behaviors: after checking whether an agent is above a food source (which it can ingest) using the [ingest food] perception, the halting action can be perform using [do halting_for eating] which will add [halting_for eating] to the database. In a next step, the eating action is carried out using [do eating] which will either remove [halting_for eating] from the database (if the agent is full), or will leave it in there, while [NOT ingest food] is true (this is so, because the food sources has been completely consumed). In this case, the agent has to remove the [halting_for eating] item, e.g., by doing [NOT halting_for eating] to be able to move on. Note that all actions are performed after all applicable rules are run in the order in which they are scheduled. Hence, it is up to the user to make sure that these actions will lead to a consistent state of the agent. For example, the following sequence of actions may lead to an unwanted state [do halting_for noreason] [do speeding_up] [do moving] as the agent's speed will be 1, but the agent won't be able to move, since [halting_for noreason] is in its database and blocking the moving action. In general, keep in mind that attempting an action using the [do ...] command does not mean that the action can or will also be performed. Another important thing to keep in mind is that eating, drinking, and moving actions are mutually exclusive, i.e., only one of them can be carried out at the same time in one cycle. 
-- -- The basic agent rulesystem

Basic agents come with a predefined "basic_agent_rulesystem" that
consists of the following rulesets:

 - eating_ruleset
 - drinking_ruleset
 - wateralarm_ruleset
 - energyalarm_ruleset
 - collidereflex_ruleset
 - moving_ruleset
 - cleanup_ruleset

The "eating_ruleset" and the "drinking_ruleset" implement eating and
drinking behaviors: if food/water is within ingestion range, then the
agent will halt for eating/drinking and continue to eat/drink until
either the agent is full or the food/water source is used up. No other
behavior can be active while the agent is eating or drinking.

The "wateralarm_ruleset" and "energyalarm_ruleset" implement "water
and food priority" behavior, which will make the agent go after
water/food exclusively if in dire need of either resource. No other
behavior can be active while the reflex is active.

The "collidereflex_ruleset" implements a reflex that moves the agent
away from obstacles or other agents if within
"immanent_collision_range" (standardly set to 35). The reflex will
reorient the agent until the agent is facing away from whatever it is
about to collide with and move it two steps in this new direction.
While the reflex is active, no other behavior can be active. Note that
the reflex can interrupt any behavior.

The "moving_ruleset" implements the basic "foraging" behavior by
computing the weighted sum of all five vectors (i.e., the
"agentvector", the "othervector", the "obstaclevector", the
"foodvector", and the "watervector"). The resultant vector indicates
the direction in which the agent will move at the current speed. If
the current speed is lower than 5, the agent will speed up until it
reaches a speed of 5. The moving behavior is the agent's default
behavior and can be interrupted by any other behavior.

The "cleanup_ruleset" does not perform any external action, but rather
removes all current perceptions from the agent's database.
This is important as otherwise lots of (probably useless) data will
accumulate in the agent's database over time. However, there may be
situations where it is necessary to keep old perceptions around (e.g.,
if a long-term memory is to be implemented), in which case the ruleset
can be modified accordingly.

Take a look at the specific definitions of the respective rulesets
using the "src" command in XVED, i.e., type "ENTER src basicrules.p"
(note that this is the numeric keypad ENTER). This will open the file
"basicrules.p", which contains all rule definitions, in a new window.

-- Using SIMWORLD -------------------------------------------------------

Running a simulation requires the user to specify certain simulation
parameters in advance. Then the simulation can be run using the
"startsimulation" or "continue" commands (the latter is used if a
simulation that has finished after a given number of cycles is to be
continued for a while).

-- -- startsimulation and continue

The "startsimulation" command takes a list of three lists: the
parameter list, the entity list, and the resource list. For example,
the command used in the beginning

    startsimulation([
        ;;; parameter list
        [[quitif 100]]
        ;;; entity list
        [[basic_agent [startup 20]] [obstacle [startup 10]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

will start a simulation of 100 cycles with 10 static obstacles, new
plant objects created with a 0.25 probability and new water objects
created with a 0.15 probability each timestep, and 20 basic agents.

The parameter list specifies global settings for the simulation. It is
a list of lists whose members can include any of the following:

 - [quitif {<number> | <procedure>}]. quitif specifies the conditions
   under which the simulation should end. It takes as an argument
   either the number of cycles the simulation should run, or a
   user-defined procedure that checks for some termination condition.
 - [output {SCREEN | FILE <filename> | NONE}]. output specifies where
   the results should be directed.
   It takes as arguments either "SCREEN" to send the results to the
   screen, "FILE <filename>" to send results to the specified file, or
   "NONE" to suppress output.
 - [format {ASCII | HTML}]. format specifies the output format.
   "ASCII" and "HTML" are the only valid formats at this time.
 - [decimals <number>]. decimals specifies the number of decimal
   places (<number>) to be displayed.
 - [setup <procedure>]. setup specifies a user-defined setup procedure
   (<procedure>) to be executed before the simulation begins. This
   procedure can be used to perform any additional setup that the user
   requires before simulation.
 - [seed <number>]. seed specifies a number with which to initialize
   the random number generator. This is useful for reproducing some
   observed behavior.

The entity list specifies which entities (e.g., agents or obstacles)
will be generated in the environment. Each entity specified in the
entity list has the format [<kind> [<keyword> ...] ...], where <kind>
is the object type (e.g., basic_agent or moving_obstacle). Supported
keywords are:

 - [startup {<number> &<procedure> | [<features> &<procedure>] ...}].
   startup creates the specified number of entities in the simulation.
   There are two ways of calling startup: <number> &<procedure>
   creates <number> entities and applies the optional <procedure> to
   all of them, while the [<features> &<procedure>] ... form creates
   one entity for each list, applying the <features> to the new entity
   and calling the optional <procedure> on it. The <features> is a
   list of lists of slots and values to which those slots should be
   initialized. Hence, [startup [[[sim_x 100][sim_y 200]]]] would
   create an entity of type <kind> with slot sim_x initialized to 100
   and slot sim_y initialized to 200 (i.e., a <kind> at location
   (100,200)).
 - [initial &<procedure>]. initial overwrites the default initializer
   for objects of type <kind>. <procedure> takes as arguments the
   newly created object and its parent (or a list of its parents if
   procreation is sexual).
 - [offspring [<probability> <kind> &<procedure>] ...]. offspring
   specifies the kinds of entities into which the offspring could
   mutate. <probability> is the probability with which the offspring
   may mutate into kind <kind>. The optional procedure <procedure> is
   executed when the offspring mutates.
 - [mutate [<slot> [<probability> <procedure>] ...] ...]. mutate
   specifies the mutation behavior for slots of the entity. Each
   <slot> will have its mutation procedure <procedure> applied with
   probability <probability>. <procedure> takes two arguments, the
   entity and the slot.
 - [record [every <number> [<slot> &<procedure>] ...] ...].
   [record [at <time> [<slot> &<procedure>] ...] ...].
   record specifies slots to be recorded (i.e., the value of the slot
   will be stored in a file). The every keyword causes the slot to be
   recorded every <number> cycles. The at keyword causes the slot to
   be recorded at the time <time> specified, either "birth", "death",
   or a cycle number. The optional <procedure> is a procedure that
   determines the value to be recorded.

The resource list specifies periodic events, commonly resource
generation. It is a list of lists whose members can include any of the
following:

 - [always <procedure> ...]. always specifies procedures to be
   executed every cycle. Each <procedure> takes as arguments the list
   of all objects (sim_objects) and the cycle number.
 - [at [<cycle> [<kind> &<procedure>]] ...]. at specifies a kind
   <kind> of entity to be created at cycle <cycle>, with an optional
   procedure <procedure> to be executed as an initializer.
 - [every [<number> [<kind> &<procedure>]] ...]. every specifies a
   kind <kind> of entity to be created every <number> cycles, with an
   optional procedure <procedure> to be executed as an initializer.
 - [random [<probability> [<kind> &<procedure>]] ...]. random
   specifies a probability <probability> with which an entity of kind
   <kind> will be created in any cycle, with an optional procedure
   <procedure> to be executed as an initializer.

All entities are by default placed at random locations within the
simulated environment. Alternatively, it is also possible to specify
the location of objects as described above. For food and water items
as well as agents and static obstacles, only two parameters are
needed:

    [sim_x <x>][sim_y <y>] -- the respective x and y coordinates

Note that the center (0,0) is standardly the actual center of the
simulation window.
For moving obstacles a third parameter is used to indicate the
direction in which the obstacle is moving:

    [sim_x <x>][sim_y <y>][heading <degrees>]

The whole list passed to "startsimulation" can then contain any
combination of numeric values for objects or lists of parameter lists.
The following, for example, creates a simulation just like the above
except that it has two agents placed at locations (50,50) and
(-50,-50), only one static obstacle placed at (0,0), and no moving
obstacles:

    startsimulation([
        ;;; parameter list
        [[quitif 100]]
        ;;; entity list
        [[basic_agent [startup [[[sim_x 50][sim_y 50]]]
                               [[[sim_x -50][sim_y -50]]]]]
         [obstacle [startup [[[sim_x 0][sim_y 0]]]]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

Try it out!

Any simulation that has finished can be continued using the "continue"
command, passing as an argument the number of cycles. For example,

    continue(200);

will continue the above simulation for another 200 cycles. Again,
simply try it out!

-- -- EXERCISE 1.

Define a simulation environment for 200 update cycles with static
obstacles at (100,-100), (100,100), (-100,100) and (-100,-100), one
moving obstacle at (0,0) going northeast, 25 random food and 15 random
water sources with generation probability 0.25 each, and finally two
agents, one at (-50,0) and one at (0,50). The answer can be found at
the end of this document.

-- -- startsimulationstep and continuestep

In addition to "startsimulation" and "continue", which run the
simulation for a fixed number of cycles, the simulation can be run in
"step mode" using the "startsimulationstep" and "continuestep"
commands (the parameters for these functions are the same as for
"startsimulation" and "continue", respectively).
In "step mode", the simulation will pause after each update cycle, printing the current cycle number followed by a question mark to allow the user to perform various (debugging) actions: - inspect the current items in the agent's database and add/remove items - inspect and modify any simulation object - track any "slot" of any simulation object You will see something like the following in the output window of XVED: 5 ? This means that the current update cycle is 5 and that the system is waiting for a command. Hitting will run the simulation for another cycle. -- -- Commands to manipulate an agent's database The following commands operate on an agent's database: pdb -- this will print the database of the (where the is the name attached to the graphical representation of the agent) adb -- this will add to the database of ddb -- this will delete from the database of Note that the database will show only those items left after all the eligible rules are run. The next section will show you a way how to print the contents of the database of an agent right after sensing and before any rules has fired. You can get lots of additional information about which rules "fire", the order of "firing", etc. There are global variables defined within POPRULEBASE (such as "prb_chatty" or "prb_walk"), which if set "true" will make the rule interpreter print lots of information about the rules as they are run. For more information, see HELP POPRULEBASE. -- -- Commands to manipulate simulation objects All available simulation objects are defined in the file "basicagents.p" in the src/ subdirectory. This file (similar to the file containing the basic rules) can be viewed with the "src" command in VED (i.e., type "ENTER src basicagents.p"). Each object in SIMWORLD is implemented as an OBJECTCLASS object in POP11 (see TEACH OBJECTCLASS). This means that it contains "slots" (i.e., instance variables) and "methods" (i.e., functions that operate on them). 
Of particular importance to this section are the "slots", as they
define and determine properties of simulation objects. It is the slots
of simulation objects that can be inspected and modified in "step
mode". The following commands are available:

 see <objectname> <slot> -- this will print the content of <slot> of
    an object <objectname>
 set <objectname> <slot> <value> -- this will set the <slot> of an
    object <objectname> to <value> and print the result
 make <kind> -- this will create a new entity of kind <kind>
 del <objectname> -- this will remove object <objectname> from the
    simulation
 move <objectname> <x> <y> -- this will move object <objectname> to
    location (<x>,<y>)

For example, to inspect the age of agent "basic_agent2", type after
the prompt:

    see basic_agent2 age

and the system will print

    Slot age: 17
    ?

(i.e., the agent denoted by "basic_agent2" has age 17). At this point,
you can either enter a new command, or hit RETURN to run the
simulation for another update cycle. To set the energy slot to 3000,
for example, do

    set basic_agent24 energy 3000

The set command is particularly useful to display the various sensory
ranges for individual agents. The following command, for example, will
display the sonar range for agent basic_agent35:

    set basic_agent35 drawsonarrange true

which can be turned off again using

    set basic_agent35 drawsonarrange false

The same works for the slots "drawvisionrange" and "drawsmellrange".

If you want to view the database of an agent right after sensing and
before any of the rules are run, set the slot "printdatabase" to
"true":

    set <agentname> printdatabase true

(set it to "false" to turn printing off). Note that this function
depends on the ruleset "printagentdatabase"; hence, to be able to use
it with user-defined rulesystems, make sure that it is included as the
first ruleset in the rulesystem (see the section on "How to add your
own rules" for details about rulesets and rulesystems).

-- -- Commands to track slots of simulation objects

Two commands are provided to "track" slots of objects: "ton" and
"toff".
If tracking is turned on for a particular slot, the slot value will be displayed at every update cycle before the user can enter a command (until tracking is turned off). This mechanism is useful for repeatedly displaying slots of interest without having to retype the respective "see" commands. Here are the formats of both commands:

    ton <objectname> <slot>                 -- turns tracking on for <slot>
                                               of <objectname>
    ton <objectname> [<slot1> ... <slotn>]  -- turns tracking on for all the
                                               listed slots (i.e., <slot1>,
                                               <slot2>, ..., <slotn>)
    toff <objectname> <slot>                -- turns tracking off for <slot>
                                               of <objectname>
    toff <objectname> [<slot1> ... <slotn>] -- turns tracking off for all the
                                               listed slots
    toff <objectname>                       -- turns tracking off for all
                                               slots of <objectname>
    toff                                    -- turns all tracking off

Once tracking is turned off (for a slot of an agent), a new "ton" command has to be issued to turn it back on. To track the current "othervector" (i.e., the sum of all the vectors pointing to agents of a different kind within "sonar_range") for agent basic_agent64, do

    ton basic_agent64 othervector

which will display something like this after the next cycle:

    Object basic_agent64 Slot othervector: {0.45 -0.34}

The following command will turn tracking of "energy" and "water" off for agent basic_agent3 (regardless of whether it was actually on):

    toff basic_agent3 [energy water]

Note that dead objects are automatically removed from the tracking list.

-- -- How to add your own rules

There are different ways in which the "agent program", i.e., the rulesystem of the basic agents, can be changed.

(1) Adding rules to the existing rulesystem

You can add new rules to the existing rulesystem of the basic agent by simply defining your own rulesets and inserting them into the existing "basic_agent_rulesystem". For example, suppose you have written the following ruleset called "planning", which implements a planner:

    define :ruleset planning;
        [DLOCAL [prb_allrules = true] [prb_sortrules = false]];

        RULE start_planning
        .....
        RULE find_shortest_path
        .....
        (additional rules)
        .....
    enddefine;

Now you want to add it to the basic agent rulesystem.
Simply copy the definition of the basic agent rulesystem from the file "basicrulesystems.p" and "include" your ruleset at the place where it should be run, e.g.,

    define :rulesystem basic_agent_rulesystem;
        cycle_limit = 1;
        include: printagentdatabase
        ;;; here comes the new ruleset
        include: planning
        include: eating_ruleset
        include: drinking_ruleset
        include: energyalarm_ruleset
        include: coolantalarm_ruleset
        include: moving_ruleset
        include: collidereflex_ruleset
        include: cleanup_ruleset
    enddefine;

Save this definition, together with the above definitions of your rulesets, in a file and load it after you have loaded the SIMWORLD package; it will override the standard definition. Your file would then look like this:

    uses simworld;

    ;;; here come your own ruleset definitions
    ...

    ;;; here comes the basic_agent_rulesystem definition
    define :rulesystem basic_agent_rulesystem;
        cycle_limit = 1;
        ...
    enddefine;

(2) Changing the standard rulesystem

You can change the standard basic agent rulesystem by redefining it in the same way as above with your own rulesets (i.e., you can design an entirely new rulesystem that is not based on the basic rulesystem and specify that agents use that instead).
However, you have to make sure that the last ruleset called in your new rulesystem is either the standardly defined:

    ;;; the cleanup rules; empties the database of all volatile information
    define :ruleset cleanup_ruleset;
        [DLOCAL [prb_allrules = true] [prb_sortrules = false]];

        RULE cleanup
        ==>
        [POP11
            lvars ag = sim_data(sim_myself);
            sim_flush_data([collision ==],ag);
            sim_flush_data([obstcollision ==],ag);
            sim_flush_data([agentcollision ==],ag);
            sim_flush_data([othercollision ==],ag);
            sim_flush_data([agentvector ==],ag);
            sim_flush_data([othervector ==],ag);
            sim_flush_data([obstaclevector ==],ag);
            sim_flush_data([foodvector ==],ag);
            sim_flush_data([watervector ==],ag);
            sim_flush_data([touch ==],ag);
            sim_flush_data([ingest ==],ag);
            sim_flush_data([obstacle ==],ag);
            sim_flush_data([agent ==],ag);
            sim_flush_data([food ==],ag);
            sim_flush_data([water ==],ag);
            sim_flush_data([energylevel ==],ag);
            sim_flush_data([waterlevel ==],ag);
            sim_flush_data([heading ==],ag);
            sim_flush_data([speed ==],ag);
        ]
    enddefine;

which will effectively delete old entries in the agent's database, or, alternatively, you have to make sure that similar functions are performed by your own rulesets (otherwise the agent's database will grow larger with every update cycle, storing only old, outdated sensory information, which will slow the simulation down and can eventually lead to a "memory limit exceeded" error if there are many agents around). For more information about rulesystems, see HELP RULESYSTEM.

-- -- EXERCISE 2.

Write a new rulesystem for an agent that will go east regardless of where it is placed and regardless of its initial heading. Test this rulesystem in an environment where the agent is placed at (0,0) and nothing else is in it (neither food nor water will appear), and run it for 150 cycles.
-- -- Running the simulation with different groups of agents

Up to now, you have learned how to use SIMWORLD with one kind of agent at a time. It is also possible to run simulations with different kinds of agents, i.e., with agents controlled by different rulesystems, at the same time. To do this, you need to define new agent classes, based on the basic agent provided, but using their own rulesystems. For example, if I wanted to implement the class "aggressive_agent", I would begin by creating a rule that implements the desired behavior (in this case: when I get really close to an object or agent, I want to lunge at it instead of running away):

    define :ruleset aggressive_collidereflex_ruleset;
        [DLOCAL [prb_allrules = true] [prb_sortrules = false]];

        RULE reflex_reflex
        [collision %true%]
        ==>
        ;;; Speed way up
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        [do speeding_up]
        ;;; Collide
        [do moving]
        [STOP]
    enddefine;

Next, I want to derive the new rulesystem from the basic_agent_rulesystem by renaming the rulesystem and replacing the "safety-oriented" collision ruleset with my "aggressive" ruleset:

    define :rulesystem aggressive_agent_rulesystem;
        cycle_limit = 1;
        include: printagentdatabase
        include: eating_ruleset
        include: drinking_ruleset
        include: energyalarm_ruleset
        include: coolantalarm_ruleset
        include: moving_ruleset
        include: aggressive_collidereflex_ruleset
        include: cleanup_ruleset
    enddefine;

Now I'm ready to define the agent class itself. It's the same as the basic_agent class, except that I substitute the aggressive_agent_rulesystem for the basic_agent_rulesystem:

    define :class aggressive_agent;
        is basic_agent;
        slot sim_rulesystem = aggressive_agent_rulesystem;
    enddefine;

The default agents are repulsed by objects and agents.
We can fix this by creating an initialization procedure for aggressive_agents that makes them the opposite:

    define :method initialize(agent:aggressive_agent, parents);
        ;;; First call the basic_agent initialize procedure.
        ;;; Always do this, unless you REALLY know what you're doing.
        call_next_method(agent, parents);
        ;;; Now reset the obstacle and agent factors
        100 -> agent.obst_factor;
        100 -> agent.same_factor;
        100 -> agent.other_factor;
    enddefine;

And, finally, I can test the new class to see that the agents do, indeed, lunge at dangerous agents and obstacles:

    startsimulation([
        ;;; parameter list
        [[quitif 1000]]
        ;;; entity list
        [[aggressive_agent [startup 20]]
         [obstacle [startup 10]]
         [plant [startup 20]]
         [water [startup 20]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

To see how they compete with basic_agents, I can create a simulation that includes agents of both types:

    startsimulation([
        ;;; parameter list
        [[quitif 1000]]
        ;;; entity list
        [[basic_agent [startup 20]]
         [aggressive_agent [startup 20]]
         [obstacle [startup 10]]
         [plant [startup 25]]
         [water [startup 15]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

As you can see from the survival of the basic agents, "aggression" rarely pays off :)

Note that it is possible to create agents with different rulesystems that belong to the same kind. This is important if different individual agents have different purposes within one kind (e.g., in an "ant" simulation, different agents may assume different roles of ants, while still belonging to the same kind of agents). To do so, simply pass the alternative rulesystem in the initialization list when starting the agents up. Here's a way to make some basic_agents aggressive, even though they're the same agent kind:

    startsimulation([
        ;;; parameter list
        [[quitif 1000]]
        ;;; entity list
        [[basic_agent
            ;;; Here are the normal basic_agents
            [startup 20]
            ;;; Here are the kamikaze basic_agents.
;;; Note that in addition to the rulesystem change, the
            ;;; initialization performed in the kamikaze initialize
            ;;; procedure is also done here.
            [startup 20 [[sim_rulesystem aggressive_agent_rulesystem]
                         [obst_factor 100] [same_factor 100]
                         [other_factor 100]]]]
         [obstacle [startup 10]]
         [plant [startup 25]]
         [water [startup 15]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.15 [water]]]]
    ]);

If multiple kinds of agents are used, it is important to understand the effects of the "kind" parameter on the perceptions of an agent (i.e., on the items placed in the agent's database after sensing and before rules are run). The following items are affected:

    [agentcollision <truthvalue>] -- <truthvalue> can be either %true% or
        %false%, depending on whether the agent is about to collide with an
        agent of its own kind
    [othercollision <truthvalue>] -- <truthvalue> can be either %true% or
        %false%, depending on whether the agent is about to collide with an
        agent of a different kind
    [agentvector <x> <y>] -- <x> and <y> are the coordinates of an
        agent-centric vector pointing in the direction of other agents of
        its own kind
    [othervector <x> <y>] -- <x> and <y> are the coordinates of an
        agent-centric vector pointing in the direction of agents of other
        kinds
    [agent <size> <dist> <dir>] -- another agent of the same kind, of size
        <size>, has been spotted at distance <dist> in direction <dir>
    [other <size> <dist> <dir>] -- an agent of a different kind, of size
        <size>, has been spotted at distance <dist> in direction <dir>

The distinction between kinds of agents can be used, for example, to study different control systems in direct competition with each other (e.g., the fact that one agent kind survives on average more often than the other kinds can be taken to imply something about the relative goodness of its control system compared to theirs).

-- Solutions to exercises ---------------------------------------------

Here are the solutions to the above exercises.
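To show how these perception items can be used in rules, here is a minimal sketch of a ruleset (the ruleset name and the chosen reaction are hypothetical, but the matched item and the "do" actions follow the ones used in the rulesets above) that makes an agent speed away when it is about to collide with an agent of a different kind:

```
define :ruleset avoid_others_ruleset;
    [DLOCAL [prb_allrules = true] [prb_sortrules = false]];

    ;;; react only when an imminent collision with an agent
    ;;; of a different kind has been sensed
    RULE avoid_other
    [othercollision %true%]
    ==>
    [do speeding_up]
    [do moving]
    [STOP]
enddefine;
```

Like aggressive_collidereflex_ruleset above, such a ruleset only works if it is included in the rulesystem before cleanup_ruleset flushes the "othercollision" item from the database.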
-- -- Solution 1

The following command will run the simulation as specified (note that many arguments are placed on separate lines):

    startsimulation([
        ;;; parameter list
        [[quitif 200]]
        ;;; entity list
        [[basic_agent [startup [[sim_x 0][sim_y 50]]
                               [[sim_x -50][sim_y 0]]]]
         [plant [startup 25]]
         [water [startup 15]]
         [obstacle [startup [[[sim_x 100][sim_y -100]]]
                            [[[sim_x 100][sim_y 100]]]
                            [[[sim_x -100][sim_y 100]]]
                            [[[sim_x -100][sim_y -100]]]]]
         [moving_obstacle [startup [[[sim_x 0][sim_y 0][heading 45]]]]]]
        ;;; resource list
        [[random [0.25 [plant]] [0.25 [water]]]]
    ]);

-- -- Solution 2

First we define a ruleset that will implement the desired behavior; call it "go_east_ruleset":

    define :ruleset go_east_ruleset;
        [DLOCAL [prb_allrules = true] [prb_sortrules = false]];

        ;;; increase the speed if it is 0
        RULE check_speed
        [speed ?x]
        ==>
        [do speeding_up]

        ;;; orient the agent so it will always go east
        RULE go_east
        [heading ?x]
        ==>
        [LVARS [new = -1 * x]]
        [do turning ?new]
        [do moving]
    enddefine;

Next we override the "basic_agent_rulesystem" to include our ruleset as well as the "cleanup_ruleset" from above:

    define :rulesystem basic_agent_rulesystem;
        cycle_limit = 1;
        ;;; here comes our new ruleset
        include: go_east_ruleset
        include: cleanup_ruleset
    enddefine;

Finally, we need to specify the right simulation parameters:

    startsimulation([
        ;;; parameter list
        [[quitif 150]]
        ;;; entity list
        [[basic_agent [startup [[[sim_x 0][sim_y 0]]]]]]
        ;;; resource list
        []
    ]);

If your agent heads east off the window too fast, use startsimulationstep instead:

    startsimulationstep([
        ;;; parameter list
        [[quitif 150]]
        ;;; entity list
        [[basic_agent [startup [[[sim_x 0][sim_y 0]]]]]]
        ;;; resource list
        []
    ]);