Participation Projects (Spring 2019)
This class has two goals: to give you a survey of prominent methods and theories in cognitive systems, and to give you a deep dive into one particular problem. At present, the ‘one particular problem’ is Raven’s Progressive Matrices. Over time, we are adding alternative projects to this class so that students may choose to take a deeper dive into other interesting AI areas, like natural language processing, intelligent user interfaces, and automated assistants.
However, for many of these problems it isn’t yet clear whether they are even feasible as class projects. With Raven’s Progressive Matrices, we had the benefit of several years of research in our lab to know what was feasible.
So, for new projects, we are going to offer students the opportunity to beta test the projects and give us feedback. This will count as participation credit: after all, the goal of participation credit is to encourage everyone to do something to make the class better for current and future students. And because it’s participation credit, “This is not possible!” is a fair response as long as there is evidence of earnest effort. This also means that no one is required to do these: if you prefer peer reviewing and participating on Piazza, you can ignore these entirely.
Submission Details
There are two groups of participation projects you may choose from during this term; your selection from the first group must be completed by the first participation project deadline, and your selection from the second group must be completed by the second project deadline.
For each project, you should submit two items: the code you wrote and a completed reflection report.
Participation Project 1 Options
There are two options for the first participation project:
Question Classification
Part of designing an intelligent question-answering agent (like Jill Watson) is the ability to classify questions by category. Toward this end, the following project specification was developed for an agent that classifies questions in this way. Complete the project following these instructions, and submit it to the corresponding assignment in Canvas. Then, complete the reflection outlined below and submit it to the corresponding assignment in Canvas.
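The project specification linked above defines the actual categories and training data; purely as a rough illustration of what “classifying questions by category” could look like, here is a toy sketch using a bag-of-words classifier. The example questions and category labels below are made up, and this is only one of many possible approaches.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data; the real project specification
# defines its own question categories and training set.
questions = [
    "When is Project 1 due?",
    "Can I submit my project a day late?",
    "What chapters does the midterm cover?",
    "Is the final exam open book?",
]
categories = ["deadline", "policy", "exam", "exam"]

# A simple bag-of-words pipeline: TF-IDF features fed into a
# logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(questions, categories)

# Predict the category of a new, unseen question.
print(classifier.predict(["When is the midterm?"]))
```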
Review Score Prediction
The OMS review site OMSCentral.com contains thousands of course reviews, each tagged with a number of different criteria: class, semester, workload in hours, quality (on a scale of 1 to 5), and rigor (on a scale of 1 to 5), as well as a plaintext review response.
Your goal for this project is to write an agent that can predict the workload (number of hours), quality (on a scale of 1 to 5), and rigor (on a scale of 1 to 5) scores for a review based primarily on the plaintext response, but optionally also on the course number and semester. As training data, you may use this archive of approximately half of the available OMS Central reviews.
Your submission should include a Python file called predict_scores.py2 (for Python 2.7) or predict_scores.py3 (for Python 3.X), which when run will take one command-line argument: a filename. The filename will represent a JSON file containing a list of reviews. Each review in this JSON file will have four values:
- ID, a unique string identifying the review;
- text, the plain text of the review;
- course, the course code of the review (which will match the course codes already existing in OMS Central, e.g. “CSE-6242”);
- semester, the semester in which the review was written (in YYYY-SS format, where SS is 01 for Spring, 02 for Summer, and 03 for Fall).
Your code should write a file called output.json, which will contain a list of the same reviews with the same keys, as well as three new keys:
- workload, an integer representing the predicted number of hours per week spent on the course based on that review;
- difficulty, an integer from 1 to 5 representing the predicted difficulty rating of the course based on that review;
- rating, an integer from 1 to 5 representing the predicted quality rating of the course based on that review.
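As a rough illustration of the expected input and output handling only (not of a real prediction approach), here is a minimal sketch. The predict_* helpers are hypothetical stubs standing in for whatever model you train on the archive above, such as a bag-of-words regression model over the review text.

```python
import json
import sys


def predict_workload(review):
    # Hypothetical stub: replace with a trained model over the review
    # text (and optionally course/semester). Returns hours per week.
    return 15


def predict_difficulty(review):
    # Hypothetical stub: should return an integer from 1 to 5.
    return 3


def predict_rating(review):
    # Hypothetical stub: should return an integer from 1 to 5.
    return 4


def main():
    # The single command-line argument is the path to a JSON file
    # containing a list of reviews (keys: ID, text, course, semester).
    with open(sys.argv[1]) as f:
        reviews = json.load(f)

    for review in reviews:
        review["workload"] = predict_workload(review)
        review["difficulty"] = predict_difficulty(review)
        review["rating"] = predict_rating(review)

    # Write the same reviews back out with the three new keys added.
    with open("output.json", "w") as f:
        json.dump(reviews, f, indent=2)


if __name__ == "__main__":
    main()
```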
When evaluating your submission, we will run it against reviews that are not in the training set above. Your grade will not be based solely on accuracy, but this will provide us with an indicator of the authenticity of your effort to solve the problem.
Submit this Python file to the corresponding assignment on Canvas. Then, complete the reflection outlined below and submit it to the corresponding assignment in Canvas.
Participation Project 2 Options
There are two options for the second participation project:
Question Learning
Ideally, when building a question-answering agent (like Jill Watson), the agent should learn from experience. Toward this end, the following project specification was developed to create an agent that can learn information directly from experience. Complete the project following these instructions, and submit it to the corresponding assignment in Canvas. Then, complete the reflection outlined below and submit it to the corresponding assignment in Canvas.
OMSCS Peer Advising Chatbot
Develop a chatbot capable of answering common questions asked by incoming or prospective students, based specifically on the content of OMS Central (you may use the export from the previous participation project). The chatbot should function as a Python script executable from the command line. It should be able to answer questions about the following topics (a rough sketch of the expected command-line interaction appears after the lists below):
- Whether a given course requires projects, proctored exams, team projects, etc.
- What programming languages a particular class requires.
- Whether a particular class is best taken on its own or with other classes.
- What amount of time a particular class requires.
- What prerequisites or prior knowledge a student should have before entering the class.
- Whether the class should be taken early or late in the program.
- What classes a new student should consider taking first.
Note that the following two things are not what we are looking for:
- We are not looking for a menu-driven text dialog (e.g. “What question would you like to ask? Select a number 1-7 below.”). The user should ask questions in plain text, and your chatbot should be able to understand variations in questions (e.g. “How hard is CS7637?” and “How difficult is KBAI?”).
- We are not looking for you to individually codify and summarize OMS Central. We’d like your chatbot to use the actual OMS Central data, not just a summary of it you compile manually.
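For beta testers looking for a starting point, here is a minimal sketch of the kind of command-line loop we have in mind. It assumes the OMS Central export is a JSON list of reviews with keys like course, workload, and difficulty (adjust these names to match the actual archive), and its keyword matching is far too crude to handle real question variations such as course nicknames like “KBAI”; a serious attempt would replace it with an actual language-understanding component.

```python
import json
import re
import statistics

# Hypothetical filename for the OMS Central export used in the previous
# project; point this at wherever your copy of the data lives.
REVIEWS_FILE = "omscentral_reviews.json"


def load_reviews(path):
    with open(path) as f:
        return json.load(f)


def find_course(question, reviews):
    # Very rough course detection: look for a code like "CS7637" or
    # "CS-7637" in the question and normalize it to the data's format.
    match = re.search(r"([A-Z]{2,4})[-\s]?(\d{4})", question.upper())
    if not match:
        return None
    code = "{}-{}".format(match.group(1), match.group(2))
    return code if any(r["course"] == code for r in reviews) else None


def answer(question, reviews):
    course = find_course(question, reviews)
    if course is None:
        return "Sorry, I couldn't tell which course you're asking about."
    course_reviews = [r for r in reviews if r["course"] == course]
    q = question.lower()
    if any(word in q for word in ("hard", "difficult", "rigor")):
        # Assumes a numeric "difficulty" field in the export.
        avg = statistics.mean(r["difficulty"] for r in course_reviews)
        return "{} has an average difficulty of {:.1f}/5.".format(course, avg)
    if any(word in q for word in ("time", "hours", "workload")):
        # Assumes a numeric "workload" field (hours per week).
        avg = statistics.mean(r["workload"] for r in course_reviews)
        return "{} takes roughly {:.0f} hours per week.".format(course, avg)
    return "I don't know how to answer that yet."


def main():
    reviews = load_reviews(REVIEWS_FILE)
    print("Ask me about OMSCS courses (type 'quit' to exit).")
    while True:
        question = input("> ").strip()
        if question.lower() in ("quit", "exit"):
            break
        print(answer(question, reviews))


if __name__ == "__main__":
    main()
```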
Submit this Python file to the corresponding assignment on Canvas. Then, complete the reflection outlined below and submit it to the corresponding assignment in Canvas.
Reflection
In addition to submitting the code of your attempt, you must also submit a reflection. The reflection may be up to 5 pages using JDF. Your reflection should contain the following information:
- Describe the project to the reader. Assume they have not read the options above: we are interested in how you would describe the project in your own words.
- Describe your approach: how did you attempt to solve the problem?
- Describe your success: how effective were your solutions?
- Reflect on the project as a whole. If success were an explicit part of the rubric, how much more or less work would this project require than a typical Raven’s Progressive Matrices project? Do you think it would be a fair alternative? Why or why not?
Grading Information
Your submission will be graded on a 30-point scale, which will cover the effort put into your attempt and the detail contained within your reflection. We do not explicitly evaluate the success of your approach, although a successful approach will almost certainly receive full credit for the first part of the grade.
Peer Review
After submission, your assignment will be posted to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and to receive additional feedback on your own. All grades will come from the graders alone.
You receive 1.5 participation points for completing a peer review by the end of the day Thursday; 1.0 for completing a peer review by the end of the day Sunday; and 0.5 for completing it after Sunday but before the end of the semester. For more details, see the participation policy.