due date: deliverables (one zip file) sent to babak at sce dot carleton dot ca by email no later than midnight on Sunday June 17th. Be prepared to demo and talk about your project on the last day of class (Tuesday June 19th).
project: this counts for 50% of your final mark!
- Use one of the existing BDI agent frameworks to create a RoboCup team. (Hint: Jason is the most regularly maintained BDI framework, but you are free to use something else if you wish!)
- this is a group project: teams of 3-4 students. It is ok if more than one team uses the same framework.
- a competition will be held between the teams, while the students comment on the behavior of their players. This will take place on the due date.
- we will use the version of the soccer server and monitor that was included in the bundle you used for your assignments.
- the games will be 5 on 5 (one of them may be a goalie if you wish).
deliverables: for each team
- commented source code + easy to execute binaries
- A paper describing the work (your understanding of the framework used, your design and models, tests/evaluation if applicable, and how to run your code), as would be submitted to a workshop or conference. (no more than 3 pages using IEEE brief article style)
- A few slides (no more than 10), as would be used to present the paper at a conference.
evaluation: the following criteria will be used:
- quality of the paper: is it well organized, well written, detailed enough? Are the results presented and discussed in a clear manner? Are the discussions, propositions and conclusions insightful?
- quality of the presentation: is the studied framework well described and detailed? Are the facilities of the framework put to good use?
- quality of the code: is the design clean? Does it crash? Is it well commented? Is it well documented?
- note that quality of the team's play will NOT weigh heavily on the evaluation: as long as the players aren't just standing still or moving aimlessly you're ok!
---
context: see the RoboCup official Web site [3]
due date: deliverables (one zip file) sent to babak at sce dot carleton dot ca by email Friday June 10th. Be prepared to talk about your project on the last day of class (Monday June 13th).
project: this counts for the remaining 50% of your final mark! Program a soccer team according to the RoboCup simulation league rules (see refs below).
- this is a group project: teams of 3-4 students
- a competition will be held between the teams, while the students comment on the behavior of their players. This will take place on the due date.
- version 14 of the soccer server is the one that should be used (and NOT the one called "tutorial version"). See "resources" at the bottom of this page.
- the games will be 5 on 5 (one of them can be a goalie if you wish).
deliverables: for each team
- commented source code
- design and models documentation + short user manual (it should be REALLY EASY to run your code)
- a few slides to describe the approach
evaluation: the following criteria will be judged in decreasing order of importance:
- Approach (is it interesting?). You can explore one or more of the approaches suggested below.
Note that the knowledge representation must be SEPARATE from the reasoning engine. For example, if you are using a state-machine approach, your state machine should be stored in a file that can be edited manually (no Java!) and that is loaded at start-up.
Simple representations such as decision trees are not acceptable, unless they are generated by a machine learning algorithm. If you are using machine learning, your sample data should be provided and the learned tasks and concepts should also be stored in a separate file that is loaded at startup.
The reason for these restrictions is to make sure that you use the concepts learned in class!
- Quality (will I be able to reuse it?)
- of the presentation
- of the documentation
- of the code (no bugs or crashes)
- code reusability
- of the design
- Performance (is your team any good?). Please note that this is the least important part of your mark. Experience shows that, given the timeframe, the more interesting the approach, the weaker the team (i.e. you will find out that even beating Krislet is not easy)! So a team that plays too well is either suspect or probably not that interesting...
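To illustrate the knowledge-separation requirement above, here is a minimal sketch of what "state machine in an editable file, loaded at start-up" could look like. All state names, percepts, actions, and the rule-file format are hypothetical illustrations, not a prescribed design:

```python
# Minimal sketch: a generic state-machine engine whose transition table is
# loaded from a plain-text description at start-up. The engine knows nothing
# about soccer; all domain knowledge lives in the (manually editable) rules.
from io import StringIO

# Hypothetical rule format: "state percept -> next_state action", one per line.
# In a real project this text would live in its own file on disk.
RULES = """
search ball_seen   -> chase  turn_to_ball
search nothing     -> search turn_scan
chase  ball_close  -> kick   dash_to_ball
kick   ball_close  -> search kick_to_goal
"""

def load_machine(fp):
    """Parse transition lines into {(state, percept): (next_state, action)}."""
    table = {}
    for line in fp:
        line = line.strip()
        if not line:
            continue
        left, right = line.split("->")
        state, percept = left.split()
        next_state, action = right.split()
        table[(state, percept)] = (next_state, action)
    return table

class Player:
    """Generic engine: follows the loaded table, holds the current state."""
    def __init__(self, table, start="search"):
        self.table, self.state = table, start

    def step(self, percept):
        self.state, action = self.table[(self.state, percept)]
        return action

table = load_machine(StringIO(RULES))
player = Player(table)
print(player.step("ball_seen"))   # turn_to_ball
print(player.step("ball_close"))  # dash_to_ball
```

Changing the team's behavior then means editing the rule text only, which is exactly the separation the restriction asks for.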
advice:
- try to use Krislet as a starting point; it is easy to understand and extend.
- try to get a running prototype really early in the project (within the first 10 days), so you can get all the major risks addressed early.
suggestions:
- Project idea #1: use Jason (a Java-based BDI agent framework)
- Project idea #2: use Drools or Jess (Java-based expert system engines)
- Project idea #3: use Weka (a machine learning framework)
- Project idea #4: create your own state-machine engine and use a state-machine description language to specify the behavior of your agent, which you then load at start-up time.
- Project idea #5: use a planner
- Project idea #6: use Protege (an ontology editor)
- Project idea #7: create and use your own agent framework by combining various existing components
- Project idea #8: use a Bayesian reasoner
- these are only suggestions, feel free to use any other approach that you find interesting, but then consult with your instructor first!
resources
- Michael Floyd's PowerPoint presentation can be found at [1]
- the server/monitor/krislet zipped bundle that you should use is at [5]
- the official soccer server manuals:
- Known KrisletBugs (some may have been fixed since)
- if you're interested in our RoboCup-related research project and/or want to use our tools for your project: [6]
---
context: see the RoboCup official Web site
due date: beginning of class on Thursday November 27th, 2008
project: this counts for the remaining 50% of your final mark! The goal of the project is to improve on the performance of our existing RoboCup imitation framework.
There are many ways to provide such an improvement, but here are a few ideas worth exploring (each idea corresponds to one project):
IDEA#1
Our current imitation framework focuses only on descriptions of the current situation to associate the corresponding action. In other words, it assumes that the agent being imitated is reactive. Your goal will be to investigate using agent "runs" (recall the definition of agents according to Michael Wooldridge) to allow us to imitate state-based agents.
Here are the tasks associated with this project:
- evaluate how "state-based" a given imitated agent is. To do so, come up with appropriate metrics that you can apply to the case base, for example something evaluating the "discernibility rate": are there situations that are considered similar yet lead to dissimilar actions? How many? How does that metric evolve when you take into account the agent's previous action in addition to the present situation? Pushing it even further, what happens to that rate when you take into account the previous action AND the previous situation in addition to the current situation? How far back do we need to take this process until we get fully discernible case bases for each team under consideration?
- Modify the current distance calculation function to take into account the work done in step 1. Calculate the weight associated with the additional features (i.e. the previous action, etc.)
- Show, quantitatively (using our usual performance metrics) and qualitatively (demonstrating your imitator agent playing against the imitated agent) the improvement you can get. This needs to be demonstrated over MANY different agents to be meaningful and to avoid overfitting.
- Write a paper capturing all of the above, using the guidelines posted by the FLAIRS conference: http://www.flairs-22.info/
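As a rough sketch of the first two tasks: one possible "discernibility" metric is the fraction of case pairs whose descriptions look identical yet disagree on the action, and one can watch it drop when the previous action is added to each case. The run data, feature names, and pair-counting metric below are illustrative assumptions, not the project's required metric:

```python
# Hedged sketch: a crude (in)discernibility metric over a tiny hypothetical run.
from itertools import combinations

# Hypothetical logged run of an imitated agent: (situation, action), in order.
run = [("ball_far", "dash"), ("ball_far", "dash"),
       ("ball_near", "kick"), ("ball_far", "turn"),
       ("ball_near", "kick")]

def indiscernibility(cases):
    """Fraction of case pairs with identical descriptions but different actions."""
    pairs = list(combinations(cases, 2))
    clashes = sum(1 for (s1, a1), (s2, a2) in pairs if s1 == s2 and a1 != a2)
    return clashes / len(pairs)

# Plain cases: the current situation only (the reactive-agent assumption).
plain = run
# Extended cases: (previous action, current situation), as in the first task.
extended = [((run[i - 1][1], run[i][0]), run[i][1]) for i in range(1, len(run))]

print(indiscernibility(plain))     # 0.2 -- "ball_far" maps to both dash and turn
print(indiscernibility(extended))  # 0.0 -- discernible with one step of history
```

A zero rate on the extended cases suggests one step of history is enough for this (toy) agent; a nonzero rate would argue for reaching further back, as the first task asks.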
More IDEAs will be posted shortly!
deliverables: for each team
- commented source code
- design and models documentation + short user manual
- a few slides to describe the approach
- the paper
resources
- http://www.nmai.ca/research-projects/agent-imitation is the web site for the project-related resources
- IMPORTANT: questions about project descriptions and deliverables should be addressed to Babak. Questions about RoboCup should be addressed to Michael Floyd (mfloyd at sce dot carleton dot ca).
- the server program used should be the one downloadable from our resource page (nmai.ca web site)
---
context: see the RoboCup official Web site
due date: beginning of class on Tuesday April 1st, 2008
project: this counts for the remaining 50% of your final mark! Program a soccer team according to the RoboCup simulation league rules (see refs below).
- this is a group project: teams of 3-4 students
- a competition will be held between the teams, while the students comment on the behavior of their players. This will take place on the due date.
deliverables: for each team
- commented source code
- design and models documentation + short user manual
- a few slides to describe the approach
evaluation: the following criteria will be judged in decreasing order of importance:
- Approach (is it interesting?). You can explore one or more of the approaches suggested below.
Note that the knowledge representation must be SEPARATE from the reasoning engine. For example, if you are using a state-machine approach, your state machine should be stored in a file that can be edited manually (no Java!) and that is loaded at start-up.
Simple representations such as decision trees are not acceptable, unless they are generated by a machine learning algorithm. If you are using machine learning, your sample data should be provided and the learned tasks and concepts should also be stored in a separate file that is loaded at startup.
The reason for these restrictions is to make sure that you use the concepts taught in class!
- Quality (will I be able to reuse it?)
- of the presentation
- of the documentation
- of the code (no bugs or crashes)
- code reusability
- of the design
- Performance (is your team any good?). Please note that this is the least important part of your mark. Experience shows that, given the timeframe (1 month), the more interesting the approach, the weaker the team (i.e. you will find out that even beating Krislet is not easy)! So a team that plays too well is either suspect or probably not that interesting...
suggestions:
- try to use Krislet as a starting point; it is easy to understand and extend.
- Project idea #1: use Jason [4] (a Java-based BDI agent framework)
- Project idea #2: use Drools or Jess (Java-based expert system engines)
- Project idea #3: use Weka (a machine learning framework)
- Project idea #4: create your own state-machine engine and use a state-machine description language to specify the behavior of your agent, which you then load at start-up time.
- Project idea #5: use a planner
- Project idea #6: use Protege (an ontology editor)
- Project idea #7: create and use your own agent framework by combining various existing components
- Project idea #8: use a Bayesian reasoner
- these are only suggestions, feel free to use any other approach that you find interesting, but then consult with your instructor first!
resources
- IMPORTANT: questions about project descriptions and deliverables should be addressed to Babak. Questions about RoboCup should be addressed to Michael Floyd (mfloyd at sce dot carleton dot ca).
- the server program used should be the one downloadable from our resource page (see refs below).
- our own RoboCup-specific web site, where you should download the Soccer Server, our improved version of Krislet and various other resources: http://chat.carleton.ca/~mfloyd/robocup/
- KrisletBugs
---
context: see the RoboCup official Web site: http://www.robocup.org and Kevin Lam's presentation [2]
due date: beginning of class on Thursday November 30th, 2006
project: this counts for 50% of your final mark!
- this is a group project: teams of 3-4 students;
- one of the deliverables for the project is a RoboCup team;
- a competition will be held between the teams, while the students comment on the behavior of their players. This will take place on the due date.
deliverables: for each team
- A paper describing the work, as would be submitted to a workshop or conference.
- Commented source code
- A few slides, as would be used to present the paper at a conference.
projects: (read the full description by clicking on the project link)
- Use an agent framework to create a RoboCup team. The two frameworks to choose from are Jack [3] and Jason [4].
evaluation: the following criteria will be used:
- quality of the paper: is it well organized, well written, detailed enough? Are the results presented and discussed in a clear manner? Are the discussions, propositions and conclusions insightful?
- quality of the presentation: is the studied framework well described and detailed? Are the facilities of the framework put to good use?
- quality of the code: is the design clean? Does it crash? Is it well commented? Is it well documented?
- note that quality of the team's play will not weigh heavily on the evaluation: as long as the players aren't just standing still or moving aimlessly you're ok!
resources
- IMPORTANT: questions about project descriptions and deliverables should be addressed to Babak. Questions about RoboCup should be addressed to Michael Floyd (mfloyd at connect dot carleton dot ca).
- the server program used should be the one downloadable from our resource page (see refs below).
- our own RoboCup-specific web site, where you should download the Soccer Server, our improved version of Krislet and various other resources: http://chat.carleton.ca/~mfloyd/robocup/
- our old RoboCup web site, contains links to some old projects, some of which might not work with more recent versions of the Soccer Server: http://www.sce.carleton.ca/netmanage/robocup/
- KrisletBugs
---
context: see the RoboCup official Web site
due date: beginning of class on Tuesday March 30th, 2004
project: this counts for the remaining 50% of your final mark! Program a soccer team according to the RoboCup simulation league rules (see refs below).
- this is a group project: teams of 3-4 students
- a competition will be held between the teams to evaluate their performance, while the students comment on the behavior of their players. This will take place on the due date.
- the server program used will be the Windows version, downloadable from our Vault (see refs below).
- the clients can make use of the Krislet program also available from our Vault.
deliverables: for each team
- commented source code
- design and models documentation + short user manual
- a few slides to describe the approach
evaluation: the following criteria will be judged in decreasing order of importance:
- Approach (is it interesting?). You can explore one or more of the approaches suggested below.
Note that the knowledge representation must be SEPARATE from the reasoning engine. For example, if you are using a state-machine approach, your state machine should be stored in a file that can be edited manually (no Java!) and that is loaded at start-up.
Simple representations such as decision trees are not acceptable, unless they are generated by a machine learning algorithm. If you are using machine learning, your sample data should be provided and the learned tasks and concepts should also be stored in a separate file that is loaded at startup.
The reason for these restrictions is to make sure that you use the concepts taught in class!
- Quality (will I be able to reuse it?)
- of the presentation
- of the documentation
- of the code (no bugs or crashes)
- code reusability
- of the design
- Performance (is your team any good?). Please note that this is the least important part of your mark. Experience shows that, given the timeframe (1 month), the more interesting the approach, the more the program stinks (i.e. you will find out that even beating Krislet is not easy)! So a team that plays too well is either suspicious or probably not that interesting...
my suggestions: (see our Vault for further explanation)
- try to use Krislet as a starting point; it is easy to understand and extend.
- you can also use some of the tools that my students have already developed (the advantage is that you get on-site support!):
- Stripslet for planning
- IAS, Classifier, LogServer for machine learning...
- the Bigus book comes with useful code for expert systems, machine learning, ...
- Weka [1] is a good machine learning framework
- these are only suggestions, feel free to use any other approach that you find interesting!
refs
Most importantly, the downloads are at: http://www.sce.carleton.ca/netmanage/robocup/Downloads
(last edited March 14, 2019)