Note: Beginning in Spring 2024, all course information—including syllabi, assignment descriptions, and supplementary course pages—is delivered via Canvas. For quick reference and for public access, however, we have generated the following export of that content. Note that some of the links below point to content within Canvas; if you are a student in the class, they should take you to the appropriate in-Canvas content. If you are not a student, they will not work; however, you can find the same content elsewhere on this page.

Syllabus

CS7637: Knowledge-Based AI

This page provides information about the Georgia Tech OMS CS7637 class on Knowledge-Based AI relevant only to the Spring 2024 semester. Note that this page is subject to change at any time. The Spring 2024 semester of the OMS CS7637 class will begin on January 8, 2024. Below, find the course’s calendar, grading criteria, and other information. For more complete information about the course’s requirements and learning objectives, please see the general CS7637 page.


Course Calendar At-A-Glance

Below is the calendar for the Spring 2024 OMS CS7637 class. Note that assignment due dates are all Sundays at 11:59PM Anywhere on Earth time.

| Week # | Week Of | Lessons | Deliverable | Assignment Due Date |
|---|---|---|---|---|
| 1 | 01/08/2024 | 01, 02 | Start-of-Course Survey | 01/14/2024 |
| 2 | 01/15/2024 | 03, 04 | RPM Milestone 1 | 01/21/2024 |
| 3 | 01/22/2024 | 05, 06 | Mini-Project 1 | 01/28/2024 |
| 4 | 01/29/2024 | 07, 08 | Homework 1 | 02/04/2024 |
| 5 | 02/05/2024 | 09 | Mini-Project 2, Quarter-Course Survey | 02/11/2024 |
| 6 | 02/12/2024 | 10, 11 | RPM Milestone 2 | 02/18/2024 |
| 7 | 02/19/2024 | 12 | Exam 1 | 02/25/2024 |
| 8 | 02/26/2024 | 13, 14 | Homework 2 | 03/03/2024 |
| 9 | 03/04/2024 | 15, 16 | Mini-Project 3, Mid-Course Survey | 03/10/2024 |
| 10 | 03/11/2024 | 17, 18 | RPM Milestone 3 | 03/17/2024 |
| 11 | 03/18/2024 | 19, 20 | Mini-Project 4 | 03/24/2024 |
| 12 | 03/25/2024 | 21, 22 | Homework 3 | 03/31/2024 |
| 13 | 04/01/2024 | 23, 24 | Mini-Project 5 | 04/07/2024 |
| 14 | 04/08/2024 | 25 | RPM Milestone 4 | 04/14/2024 |
| 15 | 04/15/2024 |  | Final RPM Project | 04/21/2024 |
| 16 | 04/22/2024 |  | Exam 2 | 04/28/2024 |
| 17 | 04/29/2024 | 26 | End-of-Course Survey, CIOS Survey | 05/05/2024 |

The calendar above lists lessons by number. For reference, here are those lessons’ titles, with the estimated time to complete each lesson (in minutes) in parentheses:

  • 01: Introduction to Knowledge-Based AI (45)
  • 02: Introduction to CS7637 (60)
  • 03: Semantic Networks (60)
  • 04: Generate & Test (30)
  • 05: Means-Ends Analysis (60)
  • 06: Production Systems (60)
  • 07: Frames (45)
  • 08: Learning by Recording Cases (30)
  • 09: Case-Based Reasoning (60)
  • 10: Incremental Concept Learning (60)
  • 11: Classification (45)
  • 12: Logic (90)
  • 13: Planning (75)
  • 14: Understanding (30)
  • 15: Commonsense Reasoning (60)
  • 16: Scripts (30)
  • 17: Explanation-Based Learning (45)
  • 18: Analogical Reasoning (60)
  • 19: Version Spaces (60)
  • 20: Constraint Propagation (45)
  • 21: Configuration (45)
  • 22: Diagnosis (45)
  • 23: Learning by Correcting Mistakes (45)
  • 24: Meta-Reasoning (30)
  • 25: Advanced Topics (60)
  • 26: Wrap-Up (30)

Course Assessments 

Your grade in this class is made up of five components: three homework assignments, five mini-projects, one large project, two exams, and class participation. Final grades will be calculated as an average of all individual grade components, weighted according to the percentages below. Students receiving a final average of 90 or above will receive an A; of 80 to 90, a B; of 70 to 80, a C; of 60 to 70, a D; and of below 60, an F. We do not plan to have a curve. It is intentionally possible for every student in the class to receive an A.
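The weighted-average scheme can be sketched in Python (the language used for the course projects). This is an illustrative sketch only, not course code: the weights come from the assessment sections of this syllabus, but the function names and data structures are invented for the example.

```python
# Component weights from this syllabus (illustrative sketch, not course code).
WEIGHTS = {
    "homework": 0.15,        # three assignments, 5% each
    "mini_projects": 0.30,   # five mini-projects, 6% each
    "rpm_milestones": 0.15,  # four milestones, 3.75% each
    "rpm_final": 0.15,       # final RPM project submission
    "exams": 0.15,           # two exams, 7.5% each
    "participation": 0.10,
}

def final_average(scores):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * score for name, score in scores.items())

def letter_grade(average):
    """Map a final average to a letter grade using the cutoffs above."""
    if average >= 90:
        return "A"
    if average >= 80:
        return "B"
    if average >= 70:
        return "C"
    if average >= 60:
        return "D"
    return "F"
```

For example, a student averaging 92 on every component would receive an A, since the weights sum to 100%.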

Homework (15%) 

You will complete three homework assignments in this course—Homework 1, Homework 2, and Homework 3—each worth 5% of your average. Each homework assignment will have one question, which you will answer in a maximum of 5 pages. These questions will cover the course material and explore your understanding of key concepts from the lectures. All assignments should be written using JDF.

Mini-Projects (30%) 

You will complete five mini-projects in this course—Mini-Projects 1 through 5—each worth 6% of your average. Each mini-project asks you to implement some AI logic shown in the course lectures, although you are also welcome to attempt to solve the problems using other techniques. For each of the mini-projects, you will also provide a short write-up of your approach, mainly so you can share it with classmates and look through others’ approaches. These write-ups should be written using JDF. You’ll submit the write-ups to Canvas and the code to Gradescope. Note that your write-up grade and your performance grade will be posted to separate categories on Canvas; each will show as worth 15%, adding to 30% for the mini-projects as a whole.

Raven’s Project Milestones (15%) and Raven’s Final Project (15%) 

The semester-long project is the Raven’s project, where you will write an agent that can solve problems on the Raven’s Progressive Matrices test. For the project, you will complete four milestones (1, 2, 3, and 4) throughout the semester, and then a final submission. The four milestones together are worth 15% of your average, and the final submission is worth another 15%. The milestones are there to ensure that you get started on the project early and have an opportunity to see your classmates’ approaches. Each milestone, as well as the final project submission, is graded half on performance and half on a written report. These write-ups should be written using JDF. You’ll submit the write-ups to Canvas and the code to Gradescope. Note that your write-up grade and your performance grade will be posted to separate categories on Canvas; each will show as worth 7.5% (milestone performance, milestone journals, project performance, and project journals), adding to 30% as a whole.

Exams (15%) 

You will take two proctored exams in this class—Exam 1 and Exam 2—each worth 7.5% of your average. Each exam is 90 minutes long with up to 25 questions, all multiple-choice, multiple-correct: each question has five choices and between 1 and 4 correct answers, and partial credit is awarded. Each exam covers all lectures through the current week (for example, Exam 1 covers lessons 01 through 12). All exams are open-book, open-note, open-internet: everything except live interaction with another person. The exams are digitally proctored. Exams will open at least one week prior to the deadline, and may open earlier. For more information, check out the About the Exams page.

Class Participation (10%) 

One of the major strengths of large online classes is the way they allow students to have a significant impact on their classmates’ experiences. As such, 10% of your class grade, and 10% of the time you spend on this class, will go toward improving the course experience for other students. This is class participation credit, and it can be earned in various ways: participating on the class forum; participating in peer review; submitting annotated bibliographies for the course resources; submitting candidate exam questions; participating in other activities; completing course surveys; completing the secret survey by clicking the hidden link here before the end of week 2 to indicate you read the entire syllabus; and more. Other mechanisms to earn participation points may be announced throughout the semester; check the course forum for those!

Course Policies

The following policies are binding for this course. 

Official Course Communication 

You are responsible for knowing the following information: 

  1. Anything posted to this syllabus (including the pages linked from here, such as the general course landing page).
  2. Anything emailed directly to you by the teaching team (including announcements via the course forum or Canvas), 24 hours after receiving such an email. 

Generally speaking, we will post announcements via Canvas and cross-post their content to the course forum; you should thus ensure that your Canvas settings are such that you receive these announcements promptly, ideally via email (in addition to other mechanisms if you’d like). Georgia Tech generally recommends that students check their Georgia Tech email once every 24 hours. Accordingly, even if an announcement or message is time-sensitive, you will not be responsible for its contents until 24 hours after it has been sent.

Note that this means you won’t be responsible for knowing information communicated in several other methods we’ll be using. You aren’t responsible for knowing anything posted to the course forum that isn’t linked from an official announcement. You aren’t responsible for anything said in Slack or other third-party sites we may sometimes use to communicate with students. You don’t need to worry about missing critical information so long as you keep up with your email and understand the documents on this web site. This also applies in reverse: we do not monitor message boxes in Canvas, and we may not respond to direct emails. If you need to get in touch with the course staff, please post privately to the course forum (either to all Instructors or to an instructor individually).

Communicating with Instructors and TAs 

Communication with the course teaching team should be handled via the discussion forum. If your question is relevant to the entire class, you should ask it publicly; if your question is specific to you, such as a question about your specific grade or submission, you should ask it privately. 

Our workflow is to regularly filter the forum for Unresolved posts, which includes top-level threads with no answer accepted by the original poster, as well as mega-threads with unresolved follow-ups. If your question requires an official answer or follow-up from an instructor or teaching assistant, make sure that it is posted as either a Question or as a follow-up to a mega-thread, and that it is marked Unresolved. Once an instructor or TA has answered your question, it will automatically be marked as Resolved; if you require further assistance, you are welcome to add a follow-up, but make sure to mark the question as Unresolved again so that it is seen by a member of the teaching team.

Similarly, in order to keep the forum organized, please post as a Post or Note instead of a Question if your question does not require an official response from the teaching team. For example, if you are interested in getting multiple perspectives from classmates, getting feedback on your ideas, or having a discussion that does not have a single answer, please use a Post or Note instead of a Question. Please reserve Question threads for questions that will likely have a single official response. TAs and instructors will regularly convert Questions that do not need a single official answer into Posts or Notes, but it will save time and allow them to focus their attention on other students if you correctly categorize your post in the first place.

Late Work 

Running such a large class involves a detailed workflow for assigning assignments to graders, grading those assignments, and returning those grades. As such, work that does not enter that workflow introduces a major delay. We have taken steps to limit as much as possible the need to ever submit work late: we have made the descriptions of all assignments available on the first day of class so that if there are expected interruptions (such as weddings, business trips, and conferences), you can complete the work ahead of time. If you have technical difficulties submitting an assignment to Canvas by the deadline, post privately to the course forum immediately and attach your submission. Then, submit it to Canvas as soon as you can thereafter.

If due to a personal emergency, health emergency, family emergency, or other unforeseeable life event you find you are unable to complete an assignment on time, please post privately to the course forum with information regarding the emergency. Depending on your unique situation, we will share guidance on how to proceed; if the emergency is projected to delay a significant quantity of the work required for the class, we may recommend withdrawing and reattempting the class at a later date. If the emergency will likely only impact a small amount of the course, we may be able to accept the work late as a one-time exception. If the emergency takes place once you have already completed a significant fraction of the coursework, we may offer an Incomplete grade to allow you to finish the class after the semester is over. 

Note that depending on the nature and significance of the request, we may require documentation from the Dean of Students office that the emergency is sufficient to justify offering an incomplete grade or accepting late work. Note also that regardless of the reason, we also cannot promise any particular turnaround time for grading work that was approved to be submitted late; it may be that grades and feedback will not be returned before the end of the term, and it may be that a temporary grade of Incomplete must be entered to leave time to grade work that was accepted late. 

If you are not comfortable sharing with us the nature of an emergency, or if you need more comprehensive advocacy, we ask you to go through the Dean of Students’ office regarding class absences. The Dean of Students is equipped to address emergencies that we lack the resources to address. Additionally, the Dean of Students office can coordinate with you and alert all your classes together instead of requiring you to contact each professor individually. The Dean of Students is there to be an advocate and partner for you when you’re in a crisis; we wholeheartedly recommend taking advantage of this resource if you are in need. You may find information on contacting the Dean of Students with regard to personal emergencies here: https://studentlife.gatech.edu/request-assistance

Academic Honesty 

All students in the class are expected to know and abide by the Georgia Tech Academic Honor Code. Specifically for us, the following academic honesty policies are binding for this class:

First, for essays, journals, and reports:

  • In written essays, all sources are expected to be cited according to APA style. When directly quoting another source, in-line quotation marks, an in-line citation, and a reference at the end of the document are all required. When summarizing another source in your own words, quotation marks are not needed, but an in-line citation and a reference at the end of your document are still required. You should consult the Purdue OWL Research and Citation Resources for proper citation practices, especially the following pages: Quoting, Paraphrasing, and Summarizing, Paraphrasing, Avoiding Plagiarism Overview, Is It Plagiarism?, and Safe Practices. You should also consult our dedicated pages (from another course) on how to use citations and how to avoid plagiarism.
  • Any non-original figures must similarly be cited. If you borrow an existing figure and modify it, you must still cite the original figure. It must be obvious what portion of your submission is your own creation. 
  • In written essays, you may not copy any content from any current or previous student in this class, regardless of whether you cite it or not. 
  • There is one exception to these policies: unless you are quoting the course videos directly, you are not required to cite content borrowed from the course itself (such as figures in videos, topics in the video, etc.). The assumption is that the reader knows what you write is based on your participation in this class, thus references to course material are not inferred to be claiming credit for the course content itself. 

Second, for code on course projects:

  • You may not under any circumstances copy any code from any current or former student in the class, or from any public project addressing the same content as the course projects, such as the Raven’s Progressive Matrices or a Block World agent. 
  • The only code segments you are permitted to borrow are isolated project-agnostic functions, meaning functions which serve a purpose that makes sense outside the context of our projects (such as, for example, inverting colors in an image). Include a link to the original source of the code and clearly note where the copied code begins and ends (for example, with /* BEGIN CODE FROM (source link) */ before and /* END CODE FROM (source link) */ after the copied code). This is partially to emphasize what your unique project and deliverable is, and partially to protect against instances where you and a classmate both borrowed a function from the same external repository. Note that annotating and attributing code is far easier than asking a TA if you need to attribute—if you need to ask, attribute it.
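The comment-marker convention above, translated to Python (the language used for the course projects), might look like the following. Everything here is a placeholder for illustration: the URL is not a real source, and `invert_colors` is an invented example of a project-agnostic function of the kind described above.

```python
# BEGIN CODE FROM (https://example.com/placeholder-repo)
def invert_colors(pixels):
    """Invert 8-bit grayscale values in a nested list of pixel rows."""
    return [[255 - p for p in row] for row in pixels]
# END CODE FROM (https://example.com/placeholder-repo)

# Everything below the END marker is the student's own work.
```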

Third, for proctored assessments:

  • During all proctored assessments, you are prohibited from interacting directly with any other person on the topic of the exam material. This includes posting on forums, sending emails or text messages, talking in person or on the phone, or any other mechanism that would allow you to receive live input from another person.
  • During all proctored assessments, you may only use the device on which you are completing the assessment; you may not use other devices, even during open-book, open-note assessments as it is not possible to know whether secondary devices are being used to consult resources or to interact with others. This means that the result of using any keyboard and mouse should be observable in the session recording.
  • Finally, you may not take any content contained on proctored assessments out of the proctored assessment, such as writing down exam questions, taking screenshots, or sharing information with classmates. Any attempt to retain a copy of exam content, or to obtain or consult exam content retained by someone else, will be treated as academic misconduct.

These policies, including the rules on all pages linked in this section, are binding for the class. Any violations of this policy will be subject to the institute’s Academic Integrity procedures, which may include a 0 grade on assignments found to contain violations; additional grade penalties; and academic probation or dismissal.

Finally, note that you may not post the work that you submit for this class publicly either during or after the semester is concluded. We understand that the work you submit for this class may be valuable for job opportunities, personal web sites, etc.; you are welcome to write about what you did for this class, and to provide the actual work privately when requested, but we ask that you do not make your actual submissions or code publicly available; this is to reduce the likelihood of future students plagiarizing your work. Similarly, unless you notify us otherwise, by participating in this class you authorize us to pursue the removal of your content if it is discovered on any public assignment repositories, especially if it is clearly contributed there by someone else. 

Note that if you are accused of academic misconduct, you are not permitted to withdraw from the class until the accusation is resolved; if you are found to have participated in misconduct, you will not be allowed to withdraw for the duration of the semester. If you do so anyway, you will be forcibly re-enrolled without any opportunity to make up work you may have missed while illegally withdrawn. 

AI Collaboration Policy

Recent advancements in artificial intelligence—Copilot, ChatGPT, etc.—can be great resources for improving your learning in the course, but it is important to ensure that their benefits are targeted at your learning rather than solely at your deliverables. Toward that end, the same academic integrity policy above applies to AI assistance: you are welcome to consult with AI agents just as you would consult with classmates, discuss ideas with friends, and seek feedback from colleagues. However, just as you would not hand your device to someone else to directly fix or improve your classwork, so also you may not copy anything directly from an AI agent into your document, nor let an AI agent directly generate content for your submission. This rule means you should disable any AI assistance more advanced than a grammar checker inside your word processors and IDEs.

Although you are prohibited from having these tools directly integrated into your workspace or from copying content from these assistants directly into your work, you are nonetheless permitted to use them more generally. The important consideration is to ensure that you are using the AI agent as a learning assistant rather than as a homework assistant: so long as your submission solely reflects your own understanding of the content, you are encouraged to let AI assistants aid in developing your understanding.

Feedback 

Every semester, we make changes and tweaks to the course formula. As a result, every semester we try some new things, and some of these things may not work. We ask your patience and support as we figure things out, and in return, we promise that we, too, will be fair and understanding, especially with anything that might impact your grade or performance in the class. We also want to consistently get feedback on how we can improve and expand the course for future iterations. You can take advantage of the feedback box on the course forum (especially if you want to gather input from others in the class), give us feedback on the surveys, or contact us directly via private course forum messages.

For other questions, please first check out the Course FAQ.

Course Calendar

This class has a lot of moving parts: lectures, homeworks, projects, tests, surveys, peer review, participation, and more. It can be a lot to track, so this calendar provides a canonical list of everything you need to do on a weekly basis. If you check off all these tasks and deliverables, you’ve completed the coursework. Numbers in parentheses give the approximate amount of time we expect each task to require (in hours).

Note that we differentiate work on RPM Milestones from work on the project more generally; even after finishing Milestone 2, for instance, you will likely spend time further improving your agent’s performance on the problems covered by Milestone 2 in order to improve for the final project submission at the end of the term.

| Week | Tasks | Deliverables | Deadline |
|---|---|---|---|
| 1 |  |  | 01/14/2024 |
| 2 |  |  | 01/21/2024 |
| 3 |  |  | 01/28/2024 |
| 4 |  |  | 02/04/2024 |
| 5 |  |  | 02/11/2024 |
| 6 |  |  | 02/18/2024 |
| 7 |  |  | 02/25/2024 |
| 8 |  |  | 03/03/2024 |
| 9 |  |  | 03/10/2024 |
| 10 |  |  | 03/17/2024 |
| 11 |  |  | 03/24/2024 |
| 12 |  |  | 03/31/2024 |
| 13 |  |  | 04/07/2024 |
| 14 |  |  | 04/14/2024 |
| 15 | Finish RPM project (9); Complete peer reviews on RPM Milestone 4 (1) |  | 04/21/2024 |
| 16 |  |  | 04/28/2024 |
| 17 |  |  | 05/05/2024 |


RPM Project Overview


Our semester-long class project involves constructing an AI agent to address a human intelligence test. The project is due at the end of the semester, but there are a number of required milestones to pass along the way. These are (a) to ensure that you are getting an early enough start to have a chance for success and (b) to give you opportunities to see your classmates’ approaches and possibly incorporate their ideas into your own project.

This page covers the project as a whole, emphasizing what your end-goal is for the end of the semester.

In a (Large) Nutshell

The CS7637 class project is to create an AI agent that can pass a human intelligence test. You’ll download a code package that contains the boilerplate necessary to run an agent you design against a set of problems inspired by the Raven’s Progressive Matrices test of intelligence. Within it, you’ll implement the Agent.py file to take in a problem and return an answer.

There are four sets of problems for your agent to answer: B, C, D, and E. Each set contains four types of problems: Basic, Test, Challenge, and Raven’s. You’ll be able to see the Basic and Challenge problems while designing your agent, and your grade will be based on your agent’s answers to the Basic and Test problems. The milestones throughout the semester will carry you through tackling more and more advanced problems: for Milestone 1, you’ll just familiarize yourself with the submission process and data structures. For Milestone 2, you’ll target the first set of problems, the relatively easy 2x2 problems from Set B. For Milestone 3, you’ll move on to the second set of problems, the more challenging 3x3 problems from Set C. For Milestone 4, you’ll look at the more difficult Set D and Set E problems, building toward the final deliverable a bit later.

For all problems, your agent will be given images that represent the problem in .png format. An example of a full problem is shown below; your agent would be given separate files representing the contents of squares A, B, C, 1, 2, 3, 4, 5, and 6.

[Image: 2x2 Basic Problem 12]

Don’t worry if the above doesn’t make sense quite yet — the projects are a bit complex when you’re getting started. The goal of this section is just to provide you with a high-level view so that the rest of this document makes a bit more sense.

Background and Goals

This section covers the learning goals and background information necessary to understand the projects.

Learning Goals

One goal of Knowledge-Based Artificial Intelligence is to create human-like, human-level intelligence, and to use that to reflect on how humans actually think. If this is the goal of the field, then what better way to evaluate intelligence of an agent than by having it take the same intelligence tests that humans take?

There are numerous tests of human intelligence, but one of the most reliable and commonly-used is Raven’s Progressive Matrices. Raven’s Progressive Matrices, or RPM, are visual analogy problems where the test-taker is given a matrix of figures and asked to select the figure that completes the matrix. An example of a 2x2 problem was shown above; an example of a 3x3 problem is shown below.

[Image: 3x3 Basic Problem 22]

In these projects, you will design agents that will address RPM-inspired problems such as the ones above. The goal of this project is to authentically experience the overall goals of knowledge-based AI: to design an agent with human-like, human-level intelligence; to test that agent against a set of authentic problems; and to use that agent’s performance to reflect on what we believe about human cognition. As such, you might not use every topic covered in KBAI on the projects; the lessons give a bottom-up view of the topics and principles of KBAI, while the project gives a top-down view of its goals and concepts.

About the Test

The full Raven’s Progressive Matrices test consists of 60 visual analogy problems divided into five sets: A, B, C, D, and E. Set A consists of 12 simple pattern-matching problems, which we won’t cover in these projects. Set B consists of 12 2x2 matrix problems, such as the first image shown above. Sets C, D, and E each consist of 12 3x3 matrix problems, such as the second image shown above. Problems are named with their set followed by their number, such as problem B-05 or C-11. The sets are of roughly ascending difficulty.

For copyright reasons, we cannot provide the real Raven’s Progressive Matrices test to everyone. Instead, we’ll be giving you sets of problems — which we call “Basic” problems — inspired by the real RPM to use to develop your agent. Your agent will be evaluated based on how well it performs on these “Basic” problems, as well as a parallel set of “Test” problems that you will not see while designing your agent. These Test problems are directly analogous to the Basic problems; running against the two sets provides a check for generality and overfitting. Your agents will also run against the real RPM as well as a set of Challenge problems, but neither of these will be factored into your grade.

Overall, by the end of the semester, your agent will answer 192 problems. More detail on the specific problems your agent will complete is in the sections that follow.

Each problem set (that is, Set B, Set C, Set D, and Set E) consists of 48 problems: 12 Basic, 12 Test, 12 Raven’s, and 12 Challenge. Only Basic and Test problems will be used in determining your grade. The Raven’s problems are run for authenticity and analysis, but are not used in calculating your grade.

In designing your agent, you will have access to the Basic and Challenge problems; you may run your agent locally to check its performance on these problems. You will not have access to the Test or Raven’s problems while designing and testing your agent: when you upload your agent to Gradescope, you will see how well it performs on those problems, but you will not see the details of the problems themselves. Challenge and Raven’s problems are not part of your grade.

Note that the Challenge problems will often be used to expose your agent to extra properties and shapes seen on the real Raven’s problems that are not covered in the Basic and Test problems. The problems themselves generally ascend in difficulty from set to set (although many people reflect that Set E is a bit easier than Set D).

Details & Deliverables

This section covers the more specific details of the four project milestones, as well as the final project you will submit.

Project Milestones

Your ultimate goal is to submit a final project that attempts all 192 problems. However, to help ensure that you start early and to give you an opportunity to see and learn from your classmates’ approaches, there are four intermediate milestones. On each of these milestones, your agent will only run against a subset of the full set of problems to allow you to test more efficiently. You will also write a brief report on your current approach for each milestone; the primary purpose of these reports will be to help you get feedback from classmates and see their approaches. For each milestone, you will be graded on a combination of your agent’s performance and the report that you write; the bars for performance on the milestones are relatively low, however, as the goal is to ensure that you are getting started early.

Each milestone has its own page. In brief, however:

  • Milestone 1: Set B, Basic Problems only. The goal of this milestone is simply to ensure you’ve set up your local project infrastructure and familiarized yourself with Gradescope. You will receive 100% of your performance credit as long as your agent answers any problem correctly. Your report will focus on early ideas you have for approaching the project.
  • Milestone 2: Set B, all problems. The goal of this milestone is to ensure you have started on the early, easier problems early in the semester. As long as your agent can answer 5 (out of 12) Basic B and 5 (out of 12) Test B problems correctly, you will receive full performance credit.
  • Milestone 3: Set C. The goal of this milestone is to ensure you have generalized your approach out to the more difficult 3x3 problems by an appropriate time of the semester. As long as your agent can answer 5 (out of 12) Basic C and 5 (out of 12) Test C problems correctly, you will receive full performance credit.
  • Milestone 4: Sets D and E. The goal of this milestone is to ensure you have looked at all four sets before the final project deadline, so that you may spend the last portion of the semester refining, improving, and writing your final report. As long as your agent can answer 10 (out of 24) Basic D & E and 10 (out of 24) Test D & E problems correctly, you will receive full performance credit.

For each milestone, your code must be submitted to the autograder by the deadline. However, it is okay if your submission is still running on the autograder when the deadline passes. Note that Gradescope by default counts your last submission for a grade; if you want an earlier submission to count, you must activate that earlier submission.

On each milestone, your grade will be 50% based on meeting the performance expectations and 50% based on the report you write. You will submit your agent to Gradescope and your report to Canvas as a PDF. The four milestones together are worth 15% of your course grade; each is thus worth 3.75% of your course grade.

Final Project

The final project will run against all 192 problems. You can submit to the final project throughout the semester to see how your agent is doing so far, but make sure to submit to the Milestone assignments as well.

More information about the final project is available on the final project page. For the final project, you will write a longer, more formal, and more complete report on your project. Your score will be based on raw performance on the Basic and Test problems. Like the milestones, performance will be 50% of your grade and your report will be 50% of your grade. Your final project is 15% of your course grade.

Getting Started

To make it easier to start the project and focus on the concepts involved (rather than the nuts and bolts of reading in problems and writing out answers), you’ll be working from an agent framework in Python.

You will place your code into the Solve method of the Agent class supplied. You can also create any additional methods, classes, and files needed to organize your code; Solve is simply the entry point into your agent.
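As a sketch, a minimal Agent.py might look like the following. The constructor and Solve signatures come from the framework description on this page; the `problemType` attribute checked below is an assumption drawn from the documentation further down, so verify it against the downloaded framework code.

```python
class Agent:
    def __init__(self):
        # Called once, before any problems are passed in; use it for
        # any one-time setup your agent needs.
        self.problems_seen = 0

    def Solve(self, problem):
        # Called once per problem; must return an integer answer.
        # `problem.problemType` ("2x2" or "3x3") is an assumed attribute
        # name -- check the framework's RavensProblem class.
        self.problems_seen += 1
        if getattr(problem, "problemType", "2x2") == "2x2":
            return 1   # placeholder guess among answer options 1-6
        return -1      # any negative number means "skip this problem"
```

This placeholder agent is only enough to exercise the pipeline end to end; your real Solve would examine the problem's figures before answering.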

The Problem Sets

As mentioned previously, by the final project, your agent will run against 192 problems: 4 Sets of 48 problems. Each of the 4 Sets is broken down into four subsets of 12: 12 Basic, 12 Test, 12 Raven’s, and 12 Challenge.

You can see the Basic and Challenge problems and test your agent’s performance on them locally. You cannot see the Test and Raven’s problems; your agent’s performance on them will only be evaluated when you submit to Gradescope. Your grade will be based solely on the Basic and Test problems.

The Raven’s problems are used so that you can see how your agent is performing on the real Raven’s test. The Challenge problems are primarily there to expose your agent to certain details that are present in the Raven’s problems but not in the Basic problems (such as shapes shaded with diagonal lines).

Within each set, the Basic, Test, and Raven’s problems are constructed to be roughly analogous to one another. The Basic problem is constructed to mimic the relationships and transformations in the corresponding Raven’s problem, and the Test problem is constructed to mimic the Basic problem very closely. So, if you see that your agent gets Basic problem B-05 correct but the corresponding Test and Raven’s B-05 problems wrong, that might be a place where your agent is either overfitting or getting lucky. This also means you can anticipate your agent’s performance on the Test problems relatively well: each Test problem uses a near-identical principle to the corresponding Basic problem. In the past, agents have averaged getting 85% as many Test problems right as Basic problems, so the correlation is strong if you are using a robust, general method.

The Problems

You are provided with the Basic and Challenge problems to use in designing your agent. The Test and Raven’s problems are hidden and will only be used when grading your project. This is to test your agents for generality: it isn’t hard to design an agent that can answer questions it has already seen, just as it would not be hard to score well on a test you have already taken before. However, performing well on problems you and your agent haven’t seen before is a more reliable test of intelligence. Your grade is based solely on your agent’s performance on the Basic and Test problems.

All problems are contained within the Problems folder of the downloadable package. Problems are divided into sets, and then into individual problems. Each problem’s folder contains four types of files:

  • The problem itself, for your benefit.
  • A ProblemData.txt file containing information about the problem, including its type. (The two booleans are for internal use: one indicates whether the problem has visual representations, which is always true, and the other indicates whether the problem has verbal descriptive data, which is not in use at this time.)
  • A ProblemAnswer.txt file containing the answer to the problem.
  • Visual representations of each figure, named A.png, B.png, etc.

You should not attempt to access ProblemData.txt or ProblemAnswer.txt directly; their filenames will be changed when we grade projects. Generally, you need not worry about this directory structure; all problem data will be loaded into the RavensProblem object passed to your agent’s Solve method, and the filenames for the different visual representations will be included in their corresponding RavensFigures.

Working with the Code

The framework code is available under Getting Started above. You may modify ProblemSetList.txt to alter which problem sets your code runs against locally; this will be useful early in the term, when you likely do not need to think about later problem sets yet. Note that this will not affect which sets your code runs against on Gradescope.

The Code

The downloadable package has a number of Python files: RavensProject, ProblemSet, RavensProblem, RavensFigure, and Agent. Of these, you should only modify the Agent class. You may make changes to the other classes to test your agent, write debug statements, etc. However, when we test your code, we will use the original versions of these files as downloaded here. Do not rely on changes to any class except for Agent to run your code. In addition to Agent, you may also write your own additional files and classes for inclusion in your project.

In Agent, you will find two methods: a constructor and a Solve method. The constructor will be called at the beginning of the program, so you may use this method to initialize any information necessary before your agent begins solving problems. After that, Solve will be called on each problem. You should write the Solve method to return its answer to the given question:

  • 2x2 questions have six answer options, so to answer the question, your agent should return an integer from 1 to 6.
  • 3x3 questions have eight answer options, so your agent should return an integer from 1 to 8.
  • If your agent wants to skip a question, it should return a negative number. Any negative number will be treated as your agent skipping the problem.

You may do all the processing within Solve, or you may write other methods and classes to help your agent solve the problems.
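These return conventions can be captured in a small sanity-check helper (hypothetical, not part of the supplied framework) that you might call before returning from Solve:

```python
def is_legal_answer(answer, problem_type):
    """Check that `answer` is a legal return value for Solve.

    Any negative number means "skip". Otherwise the answer must fall
    within the option range: 1-6 for "2x2" problems, 1-8 for "3x3".
    """
    if answer < 0:
        return True
    upper = 6 if problem_type == "2x2" else 8
    return 1 <= answer <= upper
```

A guard like this can catch off-by-one bugs (such as returning a zero-based index) before they cost you points on a submission.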

When running, the program will load questions from the Problems folder. It will then ask your agent to solve each problem one by one and write the results to ProblemResults.csv. You may check ProblemResults.csv to see how well your agent performed. You may also check SetResults.csv to view a summary of your agent’s performance at the set level.

The Documentation

  • RavensProject: The main driver of the project. This file will load the list of problem sets, initialize your agent, then pass the problems to your agent one by one.
  • RavensGrader: The grading file for the project. After your agent generates its answers, this file will check the answers and assign a score.
  • Agent: The class in which you will define your agent. When you run the project, your Agent will be constructed, and then its Solve method will be called on each RavensProblem. At the end of Solve, your agent should return an integer as the answer for that problem (or a negative number to skip that problem).
  • ProblemSet: A list of RavensProblems within a particular set.
  • RavensProblem: A single problem, such as the one shown earlier in this document. A RavensProblem includes:
    • A Dictionary of the individual Figures (that is, the squares labeled “A”, “B”, “C”, “1”, “2”, etc.) from the problem. The RavensFigures associated with keys “A”, “B”, and “C” are the problem itself, and those associated with the keys “1”, “2”, “3”, “4”, “5”, and “6” are the potential answer choices.
    • A String representing the name of the problem and a String representing the type of problem (“2x2” or “3x3”).
  • RavensFigure: A single square from the problem, labeled either “A”, “B”, “C”, “1”, “2”, etc., containing a filename referring to the visual representation (in PNG form) of the figure’s contents.

The framework is ultimately straightforward, but it can seem complicated when you’re first getting used to it. The most important things to remember are:

  • Every time Solve is called, your agent is given a single problem. By the end of Solve, it should return an answer as an integer. You don’t need to worry about how the problems are loaded from the files, how the problem sets are organized, or how the results are printed. You need only worry about writing the Solve method, which solves one question at a time.
  • RavensProblems have a dictionary of RavensFigures, with each Figure representing one of the image squares in the problem and each key representing its letter (squares in the problem matrix) or number (answer choices). All RavensFigures have filenames so your agent can load the PNG with the visual representation.

Libraries

The permitted libraries for this term’s project are:

  • The Python image processing library Pillow (version 10.0.0). For installation instructions for Pillow, see this page.
  • The Numpy library (1.25.2 at time of writing). For installation instructions on numpy, see this page.
  • OpenCV (install as opencv-contrib-python-headless, version 4.6.0.66 at the time of writing). For installation instructions, see this page.

Additionally, we use Python 3.10.12 for our autograder.
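Assuming a standard pip setup, the pinned versions above can be installed as follows (the PyPI package names are the usual ones; adjust if your environment differs):

```shell
# Match the autograder environment (Python 3.10.12) as closely as possible.
pip install Pillow==10.0.0
pip install numpy==1.25.2
pip install opencv-contrib-python-headless==4.6.0.66
```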

Submitting Your Code

This class uses Gradescope, a server-side autograder, to evaluate your submission. This means you can see how your code is performing against the Test and Raven’s problems even without seeing the problems themselves. You will have access to separate areas to submit against the Milestone checks and to submit for the final project.

Submitting

To get started submitting your code, go to Canvas and click Gradescope on the left sidebar. Then, click CS 4635/7637 in the page that loads.

You will see five project options: Milestone 1, Milestone 2, Milestone 3, Milestone 4, and Final Project.

Milestone 1 will run your code only against the Basic B problems. Milestone 2 will run your code only against the Basic B, Test B, Challenge B, and Raven’s B problems. Milestone 3 will run your code against the Basic C, Test C, Challenge C, and Raven’s C problems. Milestone 4 will run your code against the Basic D+E, Test D+E, Challenge D+E, and Raven’s D+E problems.

To submit your code, drag and drop your project files (Agent.py and any support files you created; you should not submit the other files that we supplied) into the submission window. You may also zip your code up and upload the zip file. You’ll receive a confirmation message if your submission was successful; otherwise, take the corrective action specified in the error message and resubmit.

Getting Your Results

Then, wait for the autograder to finish. If there are no errors, you will see a detailed summary of your results. Each row in the test result output is the result of a single problem from the problem set: you’ll see a Pass label if your agent answered the question correctly and a Fail label if it did not. Your overall score can be found at the top right of the screen and in the summary at the bottom of the test results.

If you’d like to resubmit your code, click “Resubmit” found at the bottom-right corner of the autograder results page. You can analyze and compare your submission results using the “Submission History” button on this page, too.

Selecting Your Results

Once you have made your last submission, click Submission History, and then click Active next to your best submission. This is the only way to commit your final score to Canvas and get points; Gradescope does not select your best score automatically. You must do this to receive points for your submission.

After the deadline, we will import your score to Canvas to use in calculating your final score on that milestone or the project.

Permitted Approaches

Generally speaking, you are allowed to pursue any approach that authentically attempts to answer the problems based solely on the content of the frames. You can mimic how a human would approach it, take a computer vision approach, or implement other mechanisms to reason over the problems.

You are not, however, permitted to use any approach that essentially involves probing the autograder for information about the correct answer solely for the purpose of identifying the same answer again on a resubmission. For example, you may not use any approach that deterministically reshuffles the answer frames so that the correct answer is guaranteed to land in a certain spot and then hardcodes that spot into your agent.

There are features in the autograder now to prevent these sorts of solutions from working, but there may still be ways around them. If you want to stress test such approaches against the autograder and let us know the results, you’re welcome to: just make sure your final submission uses a more authentic approach.

Relevant Resources

Goel, A. (2015). Geometry, Drawings, Visual Thinking, and Imagery: Towards a Visual Turing Test of Machine Intelligence. In Proceedings of the 29th Association for the Advancement of Artificial Intelligence Conference Workshop on Beyond the Turing Test. Austin, Texas.

McGreggor, K., & Goel, A. (2014). Confident Reasoning on Raven’s Progressive Matrices Tests. In Proceedings of the 28th Association for the Advancement of Artificial Intelligence Conference. Québec City, Québec.

Kunda, M. (2013). Visual problem solving in autism, psychometrics, and AI: the case of the Raven’s Progressive Matrices intelligence test. Doctoral dissertation.

Emruli, B., Gayler, R. W., & Sandin, F. (2013). Analogical mapping and inference with binary spatter codes and sparse distributed memory. In Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE.

Little, D., Lewandowsky, S., & Griffiths, T. (2012). A Bayesian model of rule induction in Raven’s progressive matrices. In Proceedings of the 34th Annual Conference of the Cognitive Science Society. Sapporo, Japan.

Kunda, M., McGreggor, K., & Goel, A. K. (2012). Reasoning on the Raven’s advanced progressive matrices test with iconic visual representations. In 34th Annual Conference of the Cognitive Science Society. Sapporo, Japan.

Lovett, A., & Forbus, K. (2012). Modeling multiple strategies for solving geometric analogy problems. In 34th Annual Conference of the Cognitive Science Society. Sapporo, Japan.

Schwering, A., Gust, H., Kühnberger, K. U., & Krumnack, U. (2009). Solving geometric proportional analogies with the analogy model HDTP. In 31st Annual Conference of the Cognitive Science Society. Amsterdam, Netherlands.

Joyner, D., Bedwell, D., Graham, C., Lemmon, W., Martinez, O., & Goel, A. (2015). Using Human Computation to Acquire Novel Methods for Addressing Visual Analogy Problems on Intelligence Tests. In Proceedings of the Sixth International Conference on Computational Creativity. Provo, Utah.

…and many more!

RPM Milestone 1

RPM Project: Milestone 1

First, make sure to read the full project overview. It contains instructions for the project as a whole and getting started with the code. This page describes only what you should do for Milestone 1 within the broader context of that project overview.

For Milestone 1, your goal is to simply demonstrate that you have set up the project infrastructure to run on your local computer and have made a submission to Gradescope. 50% of your grade on Milestone 1 is earned by meeting the minimum performance requirement; 50% of your grade is earned by completing the milestone journal.

Performance Requirement

For Milestone 1, your performance requirement is to answer one problem correctly in Gradescope. As long as you answer at least one problem correctly in a Milestone 1 Gradescope submission, you will earn the full performance credit for Milestone 1. Your answer can be hard-coded, randomly selected, etc.; the goal is simply to show that you can modify the Agent.py file to generate an answer to a problem and submit that to Gradescope.

While that is the only requirement, we highly recommend also familiarizing yourself with loading a problem from a file into Pillow and doing some initial pre-processing on it. Historically, many students have reflected that they had difficulty on the project because they started to explore using Pillow and image processing too late; if you take this opportunity to familiarize yourself with the images, you will be in a far better position to succeed going forward.

Submission Instructions

To fulfill the Performance Requirement, follow the directions on the full project overview for submitting to Gradescope, and then submit your agent to the Milestone 1 assignment.

After submitting to the Milestone 1 assignment in Gradescope, make sure to select which submission you want to have count for your graded submission. By default, Gradescope will use your latest submission, but you may want to use an earlier one. After the deadline, this will be exported to Canvas to calculate your final Milestone grade.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference or code you use (in accordance with the broader class policy on code reuse).

Milestone Journal

In addition to submitting an agent to Gradescope, you will also submit a brief milestone journal to the Milestone 1 assignment in Canvas. Your Milestone Journal must be written in JDF format. There is no maximum length; we expect most submissions to be around 5 pages, but you may write more if you would like. Writing the journal is intended to be a useful exercise for you first and foremost: it should let you externalize and formalize your ideas, it should let you get feedback from your classmates, and it should let your classmates learn from you. Your journal should not include actual code; it should just include your ideas.

For Milestone 1, you may not have done much actual development on your project. Instead, this is an opportunity to begin brainstorming how you will approach the project. You should answer the following questions:

  • How would you, as a human, reason through the problems on the Raven’s test? Choose three or four problems and describe your human approach.
  • How do you expect you will design an agent to approach these problems? Will you try to generate a verbal representation of the images, like identifying shapes and their relative positions to one another? Will you try to use some heuristic methods, like looking for patterns in the changing numbers of shapes, number of darkened pixels, etc.? Will you have your agent select from multiple strategies based on what it sees in a particular problem? You do not need to answer all these questions specifically, but these are examples of the types of questions you may answer in previewing how you expect to design your agent. Explain why your Agent’s approach will work in addressing some of the problems identified in the first bullet point.
  • As it relates to the first and second bullet point, what do you anticipate your biggest challenges in designing your agent will be and why?
  • Ensure your understanding of the Lecture and/or Ed lessons is reflected in the Journal by demonstrating application of the terminology used in Lectures and/or Ed lessons.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions. 

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

 

RPM Milestone 2

RPM Project: Milestone 2

First, make sure to read the full project overview. It contains instructions for the project as a whole and getting started with the code. This page describes only what you should do for Milestone 2 within the broader context of that project overview.

For Milestone 2, your goal is to simply demonstrate that you have made some progress in creating an agent that can address the Set B problems of the Raven’s test, especially the Basic B and Test B problems. 50% of your grade on Milestone 2 is earned by meeting the minimum performance requirement; 50% of your grade is earned by completing the milestone journal.

Performance Requirement

For Milestone 2, you will earn 5% of your Milestone grade for each Basic B problem you get right up to a maximum of 25%. You will also earn 5% of your Milestone grade for each Test B problem you get right up to a maximum of 25%.

In other words: if your agent answers at least 5 Basic B and 5 Test B problems correctly, you earn full credit for the Performance Requirement of Milestone 2. Each problem fewer than that your agent can answer results in a deduction of 5% from your Milestone grade.

Submission Instructions

To fulfill the Performance Requirement, follow the directions on the full project overview for submitting to Gradescope, and then submit your agent to the Milestone 2 assignment.

After submitting to the Milestone 2 assignment in Gradescope, make sure to select which submission you want to have count for your graded submission. By default, Gradescope will use your latest submission, but you may want to use an earlier one. After the deadline, this will be exported to Canvas to calculate your final Milestone grade.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference or code you use (in accordance with the broader class policy on code reuse).

Milestone Journal

In addition to submitting an agent to Gradescope, you will also submit a brief milestone journal to the Milestone 2 assignment in Canvas. Your Milestone Journal must be written in JDF format. There is no maximum length; we expect most submissions to be around 5 pages, but you may write more if you would like. Writing the journal is intended to be a useful exercise for you first and foremost: it should let you externalize and formalize your ideas, it should let you get feedback from your classmates, and it should let your classmates learn from you. Your journal should not include actual code; it should just include a description of your agent’s approach.

Note that your Milestone 2 should be all original content created by you; if there is content that you wrote for a previous Milestone that is still pertinent, you may refer back to that content again (including quoting yourself), and then go on to discuss what has changed or what is new (or, why the same content you wrote previously is still so applicable).

For example, you might write:

In Milestone 1, I wrote that my agent works by “calculating a percentage change in the number of black pixels between each pair of frames, and checking for mathematical patterns in the changing ratio of black pixels. Then, I checked each answer option to see if it maintained the observed mathematical pattern.” For Set B, that continued to work, but I had to modify my code to include a check for exponential growth rather than just linear growth.

If you need to quote large portions of your prior writing, you can use a blockquote, or include your prior Milestone in an appendix that you refer to. The important element is that your TAs and classmates should be able to identify the new content easily.

For Milestone 2, you should answer the following questions:

  • How does your agent currently function? Depending on the inner workings of your agent, there may be a lot of different ways to describe this. For example, does it select from multiple problem-solving approaches depending on what it sees in the problem? Does it perform shape recognition or direct pixel comparison? Does it generate a candidate solution and compare it to the options, or does it take each potential answer and assess its likelihood to be the correct answer? You need not answer these specific questions, but they are examples of ways you might describe your agent’s design.
  • How well does your agent currently perform? How many problems does it get right on the Set B problems? 
  • What problems does your agent perform well on? What problems (if any) does it struggle on? Why do you think it performs well on some but struggles on others (if any)?
  • How efficient is your agent? Does it take a long time to run? (Give specific metrics) Does it slow down significantly on certain kinds of problems? If so, by how much? Why do you think your agent runs slower on certain problems?
  • How do you plan to improve your agent’s performance on these problems before the final project submission?
  • How do you plan to generalize your agent’s design to cover 3x3 problems in addition to the 2x2 problems?
  • What feedback would you hope to get from classmates about how your agent could do better? What are some of the challenges you are facing (or think you will face later) that could benefit from someone else’s feedback?
  • Ensure your understanding of the Lecture and/or Ed lessons is reflected in the Journal by demonstrating application of the terminology used in Lectures and/or Ed lessons.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions. 

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

 

RPM Milestone 3

RPM Project: Milestone 3

First, make sure to read the full project overview. It contains instructions for the project as a whole and getting started with the code. This page describes only what you should do for Milestone 3 within the broader context of that project overview.

For Milestone 3, your goal is to demonstrate that you have generalized your approach out to cover the types of 3x3 problems present in Set C. 50% of your grade on Milestone 3 is earned by meeting the minimum performance requirement; 50% of your grade is earned by completing the milestone journal.

Performance Requirement

For Milestone 3, you will earn 5% of your Milestone grade for each Basic C problem you get right up to a maximum of 25%. You will also earn 5% of your Milestone grade for each Test C problem you get right up to a maximum of 25%.

In other words: if your agent answers at least 5 Basic C and 5 Test C problems correctly, you earn full credit for the Performance Requirement of Milestone 3. Each problem fewer than that your agent can answer results in a deduction of 5% from your Milestone grade.

Submission Instructions

To fulfill the Performance Requirement, follow the directions on the full project overview for submitting to Gradescope, and then submit your agent to the Milestone 3 assignment.

After submitting to the Milestone 3 assignment in Gradescope, make sure to select which submission you want to have count for your graded submission. By default, Gradescope will use your latest submission, but you may want to use an earlier one. After the deadline, this will be exported to Canvas to calculate your final Milestone grade.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference or code you use (in accordance with the broader class policy on code reuse).

Milestone Journal

In addition to submitting an agent to Gradescope, you will also submit a brief milestone journal to the Milestone 3 assignment in Canvas. Your Milestone Journal must be written in JDF format. There is no maximum length; we expect most submissions to be around 5 pages, but you may write more if you would like. Writing the journal is intended to be a useful exercise for you first and foremost: it should let you externalize and formalize your ideas, it should let you get feedback from your classmates, and it should let your classmates learn from you. Your journal should not include actual code; it should just include a description of your agent’s approach.

Note that your Milestone 3 should be all original content created by you; if there is content that you wrote for a previous Milestone that is still pertinent, you may refer back to that content again (including quoting yourself), and then go on to discuss what has changed or what is new (or, why the same content you wrote previously is still so applicable).

For example, you might write:

In Milestone 2, I wrote that my agent works by “calculating a percentage change in the number of black pixels between each pair of frames, and checking for mathematical patterns in the changing ratio of black pixels. Then, I checked each answer option to see if it maintained the observed mathematical pattern.” For Set C, that continued to work, and in fact the patterns were easier to find because they were fit to 28 possible pairs instead of just 3.

If you need to quote large portions of your prior writing, you can use a blockquote, or include your prior Milestone in an appendix that you refer to. The important element is for TAs and classmates to be able to identify the new content.

For Milestone 3, you should answer the following questions:

  • How does your agent currently function? Depending on the inner workings of your agent, there may be a lot of different ways to describe this. For example, does it select from multiple problem-solving approaches depending on what it sees in the problem? Does it perform shape recognition or direct pixel comparison? Does it generate a candidate solution and compare it to the options, or does it take each potential answer and assess its likelihood? You need not answer these specific questions, but they are examples of ways you might describe your agent’s design.
  • How well does your agent currently perform? How many problems does it get right on the Set C problems?
  • What problems does your agent perform well on? What problems (if any) does it struggle on? Why do you think it performs well on some but struggles on others (if any)?
  • How efficient is your agent? Does it take a long time to run? (Give specific metrics) Does it slow down significantly on certain kinds of problems? If so, by how much? Why do you think your agent runs slower on certain problems?
  • How do you plan to improve your agent’s performance on these problems before the final project submission?
  • Looking ahead to Sets D and E, which problems do you think your agent will be able to solve at its present stage? Which problems will it struggle on?
  • What feedback would you hope to get from classmates about how your agent could do better? What are some of the challenges you are facing (or think you will face later) that could benefit from someone else’s feedback?
  • Ensure your understanding of the Lecture and/or Ed lessons is reflected in the Journal by demonstrating application of the terminology used in Lectures and/or Ed lessons.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions. 

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

 

RPM Milestone 4

RPM Project: Milestone 4

First, make sure to read the full project overview. It contains instructions for the project as a whole and getting started with the code. This page describes only what you should do for Milestone 4 within the broader context of that project overview.

For Milestone 4, your goal is to demonstrate that you have addressed all the problems covered by the final project. This should leave you the last part of the term to improve your agent, borrow and experiment with ideas from your classmates, and write your final, more formal report. 50% of your grade on Milestone 4 is earned by meeting the minimum performance requirement; 50% of your grade is earned by completing the milestone journal.

Performance Requirement

For Milestone 4, you will earn 2.5% of your Milestone grade for each Basic D and Basic E problem you get right up to a maximum of 25%. You will also earn 2.5% of your Milestone grade for each Test D and Test E problem you get right up to a maximum of 25%.

In other words: if your agent answers at least 10 Basic D+E and 10 Test D+E problems correctly, you earn full credit for the Performance Requirement of Milestone 4. Each problem short of those thresholds results in a deduction of 2.5% from your Milestone grade.

Submission Instructions

To fulfill the Performance Requirement, follow the directions on the full project overview for submitting to Gradescope, and then submit your agent to the Milestone 4 assignment.

After submitting to the Milestone 4 assignment in Gradescope, make sure to select which submission you want to have count for your graded submission. By default, Gradescope will use your latest submission, but you may want to use an earlier one. After the deadline, this will be exported to Canvas to calculate your final Milestone grade.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference or code you use (in accordance with the broader class policy on code reuse).

Milestone Journal

In addition to submitting an agent to Gradescope, you will also submit a brief milestone journal to the Milestone 4 assignment in Canvas. Your Milestone Journal must be written in JDF format. There is no maximum length; we expect most submissions to be around 5 pages, but you may write more if you would like. Writing the journal is intended to be a useful exercise for you first and foremost: it should let you externalize and formalize your ideas, it should let you get feedback from your classmates, and it should let your classmates learn from you. Your journal should not include actual code; it should just include a description of your agent’s approach.

Note that your Milestone 4 should be all original content; if there is content that you wrote for a previous Milestone that is still pertinent, you may refer back to that content again (including quoting yourself), and then go on to discuss what has changed or what is new (or, why the same content you wrote previously is still so applicable).

For example, you might write:

In Milestone 1, I wrote that my agent works by “calculating a percentage change in the number of black pixels between each pair of frames, and checking for mathematical patterns in the changing ratio of black pixels. Then, I checked each answer option to see if it maintained the observed mathematical pattern.” Then, in Milestone 3, I wrote that “For Set C, that continued to work, and in fact the patterns were easier to find because they were fit to 28 possible pairs instead of just 3.” However, for Sets D and E, that approach struggled because fewer problems are solvable by sequences of pixel ratio changes. So, I had to add an additional heuristic method…

If you need to quote large portions of your prior writing, you can use a blockquote, or include your prior Milestone in an appendix that you refer to. The important element is for TAs and classmates to be able to identify the new content.

For Milestone 4, you should answer the following questions:

  • How does your agent currently function? Depending on the inner workings of your agent, there may be a lot of different ways to describe this. For example, does it select from multiple problem-solving approaches depending on what it sees in the problem? Does it perform shape recognition or direct pixel comparison? Does it generate a candidate solution and compare it to the options, or does it take each potential answer and assess its likelihood? You need not answer these specific questions, but they are examples of ways you might describe your agent’s design.
  • How well does your agent currently perform? How many problems does it get right on the Set D+E problems?
  • What problems does your agent perform well on? What problems (if any) does it struggle on? Why do you think it performs well on some but struggles on others (if any)?
  • How efficient is your agent? Does it take a long time to run? (Give specific metrics) Does it slow down significantly on certain kinds of problems? If so, by how much? Why do you think your agent runs slower on certain problems?
  • How do you plan to improve your agent’s design and performance on these problems before the final project submission?
  • What feedback would you hope to get from classmates about how your agent could do better? What are some of the challenges you are facing (or think you will face later) that could benefit from someone else’s feedback?
  • Ensure your understanding of the Lecture and/or Ed lessons is reflected in the Journal by demonstrating application of the terminology used in Lectures and/or Ed lessons.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions. 

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Final RPM Project

RPM Project: Final Project

First, make sure to read the full project overview. It contains instructions for the project as a whole and getting started with the code. This page describes only what you should do for the final project submission within the broader context of that project overview.

For the final project submission, you will do the same thing you have done on the milestones, but you will be graded on your agent’s final performance on all Basic and Test problems. You will also write a more formal report on your final agent design. 50% of your project grade is earned based on your agent’s performance; 50% of your grade is earned by completing the final project report.

Performance Score

For the final project, you will receive 1 point for each Basic and Test problem your agent correctly solves across all 96 Basic and Test problems. Your maximum score is therefore 96/96, which would earn full credit for your performance score. Answering 77 out of 96, for instance, would earn you an 80% performance score, which would be 40 points (out of 100) toward your full project grade.

Submission Instructions

To earn your performance score, follow the directions on the full project overview for submitting to Gradescope, and then submit your agent to the Final Project assignment.

After submitting to the Final Project assignment in Gradescope, make sure to select which submission you want to have count for your graded submission. By default, Gradescope will use your latest submission, but you may want to use an earlier one. After the deadline, this will be exported to Canvas to calculate your final project grade.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference or code you use (in accordance with the broader class policy on code reuse).

Final Project Report

In addition to submitting an agent to Gradescope, you will also submit a Final Project Report to the Final Project assignment in Canvas. Your Final Project Report must be written in JDF format. It may be up to 10 pages. If you need to include more than 10 pages of content, you may include them in the appendices, but note that content in appendices will not be used for grading; it may just be useful if you want to include long sequences of images or diagrams that quickly meet the page limit.

Your Final Project Report is more formal than your Milestone Journals; it is meant to present the design of your agent and your observations on its performance and relationship to human cognition. Your Final Project Report will also be graded more strictly. Unlike your Milestones, however, your Final Project Report may borrow content directly from your Milestone descriptions, as it is intended to be a standalone final write-up.

Your Final Project Report should answer the following questions. They should be answered separately; we want to see a description of your agent’s process, and then see that description applied to selected problems.

  • How does your agent work?
  • How well does your agent perform across all the Sets of problems? Please give specific metrics. 
  • What problems does your agent solve successfully? Select 3 (or more) problems, ideally with significant differences, that your agent solves correctly and describe your agent’s reasoning process in each problem.
  • Where does your agent struggle? Select 2 (or more) problems that your agent does not solve correctly and describe the breakdown in your agent’s reasoning process in each problem.
  • How would you characterize your overall approach to designing your agent? For example, is your agent described by an ever-expanding suite of heuristics it chooses from? Did you throw out one approach altogether and start over with a new one?
  • Compare the way(s) your final agent approaches the problems and the way(s) you, a human (we assume) would approach the problems. Does your agent take a similar/the same approach? Why or why not?
  • Ensure your understanding of the Lecture and/or Ed lessons is reflected in the Report by demonstrating application of the terminology used in Lectures and/or Ed lessons.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions. 

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Peer Review

After submission, your report will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

 

Mini-Project 1

Mini-Project 1: Sheep & Wolves

In this mini-project, you’ll implement an agent that can solve the Sheep and Wolves problem for an arbitrary number of initial wolves and sheep. You will submit the code for solving the problem to the Mini-Project 1 assignment in Gradescope. You will also submit a report describing your agent to Canvas. Your grade will be based on a combination of your report (50%) and your agent’s performance (50%).

About the Project

The Sheep and Wolves problem is identical to the Guards & Prisoners problem from the lecture, except that it makes more semantic sense why the wolves can be alone (they have no sheep to eat). Ignore for a moment the absurdity of wolves needing to outnumber sheep in order to overpower them. Maybe it’s baby wolves vs. adult rams.

As a reminder, the problem goes like this: you are a shepherd tasked with getting sheep and wolves across a river for some reason. If the wolves ever outnumber the sheep on either side of the river, the wolves will overpower and eat the sheep. You have a boat, which can only take one or two animals in it at a time, and must have at least one animal in it because you’ll get lonely (and because the problem is trivial otherwise). How do you move all the animals from one side of the river to the other?

In the original Sheep & Wolves (or Guards & Prisoners) problem, we specified there were 3 sheep and 3 wolves; here, though, your agent should be able to solve the problem for an arbitrary number of initial sheep and wolves. You may assume that the initial state of the problem will follow those rules (e.g. we won’t give you more wolves than sheep to start). However, not every initial state will be solvable; there may be combinations of sheep and wolves that cannot be solved.

You will return a list of moves that will solve the problem, or an empty list if the problem is unsolvable based on the initial set of Sheep and Wolves. You will also submit a brief report describing your approach.

Your Agent

To write your agent, download the starter code below. Complete the solve() method, then upload it to Gradescope to test it against the autograder. Before the deadline, make sure to select your best performance in Gradescope as your submission to be graded.

Starter Code

Here is your starter code: SemanticNetsAgent.zip.

The starter code contains two files: SemanticNetsAgent.py and main.py. You will write your agent in SemanticNetsAgent.py. You may test your agent by running main.py. You will only submit SemanticNetsAgent.py; you may modify main.py to test your agent with different inputs.

In SemanticNetsAgent.py, your solve() method will have two parameters: the number of sheep and the number of wolves. For example, for the original Sheep & Wolves problem from the lectures, we would call your agent with your_agent.solve(3, 3). You may assume that the initial state is valid (there will not be more Wolves than Sheep in the initial state).

Returning Your Solution

Your solve() method should return a list of moves that will result in the successful solving of the problem. These are only the moves your agent ultimately selected to be performed, not the entire web of possible moves. Each item in the list should be a 2-tuple where each value is an integer representing the number of sheep (the first integer) or wolves (the second integer) to be moved; we assume the moves are alternating. So, if your first move is (1, 1), that means you’re moving one sheep and one wolf to the right. If your second move is (0, 1), that means you’re moving one wolf to the left.

For example, one possible solution to the test case of 3 sheep and 3 wolves would be:

[(1, 1), (1, 0), (0, 2), (0, 1), (2, 0), (1, 1), (2, 0), (0, 1), (0, 2), (0, 1), (0, 2)]

The result of running the moves in order should be (a) that all animals are successfully moved from left to right, and (b) that all intermediate states along the way are valid (wolves never outnumber sheep in any state).
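One way to produce such a move list is a breadth-first search over (sheep on the left, wolves on the left, boat side) states; BFS also guarantees the returned solution uses the minimum number of moves. This is an illustrative sketch only, not the starter code, and other designs are equally valid:

```python
from collections import deque

def solve(initial_sheep, initial_wolves):
    """Illustrative BFS over (sheep_left, wolves_left, boat) states.
    boat = 0 means the boat is on the left bank, 1 means the right."""
    start = (initial_sheep, initial_wolves, 0)
    goal = (0, 0, 1)
    boat_loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (sheep, wolves)

    def valid(s, w):
        # A bank is safe if it has no sheep, or at least as many sheep as wolves.
        sr, wr = initial_sheep - s, initial_wolves - w
        if s and w > s:
            return False
        if sr and wr > sr:
            return False
        return True

    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (s, w, boat), path = frontier.popleft()
        if (s, w, boat) == goal:
            return path
        for ds, dw in boat_loads:
            if boat == 0:
                # Left-to-right: the load must be available on the left bank.
                if ds > s or dw > w:
                    continue
                ns, nw = s - ds, w - dw
            else:
                # Right-to-left: the load must be available on the right bank.
                if ds > initial_sheep - s or dw > initial_wolves - w:
                    continue
                ns, nw = s + ds, w + dw
            nstate = (ns, nw, 1 - boat)
            if valid(ns, nw) and nstate not in seen:
                seen.add(nstate)
                frontier.append((nstate, path + [(ds, dw)]))
    return []  # no reachable goal state: the configuration is unsolvable
```

Under this sketch, `solve(1, 1)` returns `[(1, 1)]`, and unsolvable configurations come back as the empty list.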

Submitting Your Solution

To submit your agent, go to the course in Canvas and click Gradescope on the left side. Then, select CS7637 if need be.

You will see an assignment named Mini-Project 1. Select this project, then drag your SemanticNetsAgent.py file into the autograder. If you have multiple files, add them to a zip file and drag that zip file into the autograder.

When your submission is done running, you’ll see your results.

How You Will Be Graded

Your agent will be run against 20 initial configurations of sheep and wolves. 7 of these will be the same every time your agent is tested: (1, 1), (2, 2), (3, 3), (5, 3), (6, 3), (7, 3), and (5, 5). The other 13 will be semi-randomly selected, with up to 25 of each type of animal and the number of sheep always greater than or equal to the number of wolves.

You can earn up to 40 points. You will earn 1 point for each of the 20 configurations you solve correctly (meaning that your solution does in fact move all the animals to the right side), and an additional point for each of the 20 configurations you solve optimally (in the minimum number of moves). For every case that you correctly label as unsolvable (by returning an empty list), you will receive 2 points as well.

You may submit up to 40 times prior to the deadline. The large majority of students do not need nearly that many submissions, so do not feel like you should use all 40; this cap is in place primarily to prevent brute force methods for farming information about patterns in hidden test cases or submitting highly random agents hoping for a lucky submission. Note that Gradescope has no way for us to increase your individual number of submissions, so we cannot return submissions to you in the case of errors or other issues, but you should have more than enough submissions to handle errors if they arise.

You must select which of your submissions you want to count for a grade prior to the deadline. Note that by default, Gradescope marks your last submission as your submission to be graded. We cannot automatically select your best submission. Your agent score is worth 50% of your overall mini-project grade.

Your Report

In addition to submitting your agent to Gradescope, you should also write up a short report describing your agent’s design and performance. Your report may be up to 4 pages, and should answer the following questions:

  • How does your agent work? How does it generate new states, and how does it test them?
  • How well does your agent perform? Does it struggle on any particular cases?
  • How efficient is your agent? How does its performance change as the number of animals rises?
  • Does your agent do anything particularly clever to try to arrive at an answer more efficiently?
  • How does your agent compare to a human? Does your agent solve the problem the same way you would?

You are encouraged but not required to include visuals and diagrams in your four-page report. The primary goal of the report is to share your approach with your classmates, and to let you see your classmates’ approaches. You may include code snippets if you think they are particularly novel, but please do not include the entirety of your code.

Tip: Remember, we want to see how you put the content of this class into action when designing your agent. You don’t need to use the principles and methods from the lectures precisely, but we want to see your knowledge of the content reflected in your terminology and your reflection.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Review that any included figures are legible at standard magnification, with text or symbols inside figures at equal or greater size than figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your report is worth 50% of your mini-project grade. As such, your report will be graded on a 40-point scale coinciding with a rubric designed to mirror the questions above. Make sure to answer those questions; if any of the questions are irrelevant to the design of your agent, explain why.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Mini-Project 2

Mini-Project 2: Block World

In this mini-project, you’ll implement an agent that can solve Block World problems for an arbitrary initial arrangement of blocks. You will be given an initial arrangement of blocks and a goal arrangement of blocks, and return a list of moves that will transform the initial state into the goal state. You will submit the code for solving the problem to the Mini-Project 2 assignment in Gradescope. You will also submit a report describing your agent to Canvas. Your grade will be based on a combination of your report (50%) and your agent’s performance (50%).

About the Project

In a Block World problem, you are given an original arrangement of blocks and a target arrangement of blocks, like this:

blockworld.png

For us, blocks will be identified as single letters from A to Z.

Blocks may be moved one at a time. A block may not be moved if there is another block on top of it. Blocks may be placed either on the table or on top of another block. Your goal is to generate a list of moves that will turn the initial state into the goal state. In the example above, that could be: Move D to the table, move B to A, move C to D, move B to C, and move A to B.

There may be more than one sequence of moves that can accomplish the goal. If so, your goal is to find the sequence with the smallest number of moves that will turn the initial state into the goal state.

Your Agent

To write your agent, download the starter code below. Complete the solve() method, then upload it to Gradescope to test it against the autograder. Before the deadline, make sure to select your best performance in Gradescope as your submission to be graded.

Starter Code

Here is your starter code: BlockWorldAgent.zip.

The starter code contains two files: BlockWorldAgent.py and main.py. You will write your agent in BlockWorldAgent.py. You may test your agent by running main.py. You will only submit BlockWorldAgent.py; you may modify main.py to test your agent with different inputs.

In BlockWorldAgent.py, your solve() method will have two parameters: the initial configuration of blocks, and the goal configuration of blocks. Configurations will be represented by lists of lists of characters, where each character represents a different block (e.g. “A” would be Block A). Within each list, each subsequent block is on top of the previous block in the list; the first block in the list is on the table. For example, this list would represent the configuration shown above: two stacks, one with D on B and B on C, and the other with just A:

[["C", "B", "D"], ["A"]]

There may be up to 26 blocks in a puzzle. You may assume that the goal configuration contains all the blocks and only the blocks present in the initial configuration.

Returning Your Solution

Your solve() method should return a list of moves that will convert the initial state into the goal state. Each move should be a 2-tuple. The first item in each 2-tuple should be what block is being moved, and the second item should be where it is being moved to—either the name of another block or “Table” if it is to be put into a new pile.

For example, imagine the following initial and target state:

Initial: [["A", "B", "C"], ["D", "E"]]
Goal: [["A", "C"], ["D", "E", "B"]]

Put in simple terms, the goal here is to move Block B from the middle of the pile on the left and onto the top of the pile on the right.

Given that, this sequence of moves would be an acceptable solution:

("C", "Table")
("B", "E")
("C", "A")

Submitting Your Solution

To submit your agent, go to the course in Canvas and click Gradescope on the left side. Then, select CS7637 if need be.

You will see an assignment named Mini-Project 2. Select this project, then drag your BlockWorldAgent.py file into the autograder. If you have multiple files, add them to a zip file and drag that zip file into the autograder.

When your submission is done running, you’ll see your results.

How You Will Be Graded

Your agent will be run against 20 pairs of initial and goal configurations. 8 of these will be the same every time your agent is tested; these are present in the original BlockWorldAgent.py file. The remaining 12 will be randomly generated, with up to 26 blocks each.

You can earn up to 40 points. You will earn 1 point for each of the 20 pairs of configurations you solve correctly (meaning that your solution does in fact transform the initial state into the goal state), and an additional point for each of the 20 configurations you solve optimally (in the minimum number of moves).

You may submit up to 40 times prior to the deadline. The large majority of students do not need nearly that many submissions, so do not feel like you should use all 40; this cap is in place primarily to prevent brute force methods for farming information about patterns in hidden test cases or submitting highly random agents hoping for a lucky submission. Note that Gradescope has no way for us to increase your individual number of submissions, so we cannot return submissions to you in the case of errors or other issues, but you should have more than enough submissions to handle errors if they arise.

You must select which of your submissions you want to count for a grade prior to the deadline. Note that by default, Gradescope marks your last submission as your submission to be graded. We cannot automatically select your best submission. Your agent score is worth 50% of your overall mini-project grade.

Your Report

In addition to submitting your agent to Gradescope, you should also write up a short report describing your agent’s design and performance. Your report may be up to 4 pages, and should answer the following questions:

  • How does your agent work? Does it use Generate & Test? Means-Ends Analysis? Some other approach?
  • How well does your agent perform? Does it struggle on any particular cases?
  • How efficient is your agent? How does its performance change as the number of blocks grows?
  • Does your agent do anything particularly clever to try to arrive at an answer more efficiently?
  • How does your agent compare to a human? Does your agent solve the problem the same way you would?

You are encouraged but not required to include visuals and diagrams in your four-page report. The primary goal of the report is to share your approach with your classmates, and to let you see your classmates’ approaches. You may include code snippets if you think they are particularly novel, but please do not include the entirety of your code.

Tip: Remember, we want to see how you put the content of this class into action when designing your agent. You don’t need to use the principles and methods from the lectures precisely, but we want to see your knowledge of the content reflected in your terminology and your reflection.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification and that text or symbols inside figures are at least as large as the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your report is worth 50% of your mini-project grade. As such, your report will be graded on a 40-point scale coinciding with a rubric designed to mirror the questions above. Make sure to answer those questions; if any of the questions are irrelevant to the design of your agent, explain why.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Mini-Project 3

Mini-Project 3: Sentence Reading

In this project, you’ll implement an agent that can answer simple questions about simple sentences made from the 500 most common words in the English language, as well as a set of 20 possible names and properly-formatted times. Your agent will be given a sentence and a question, and required to return an answer to the question; the answer will always be a word from the sentence. You will submit the code for answering these questions to the Mini-Project 3 assignment in Gradescope. You will also submit a report describing your agent to Canvas. Your grade will be based on a combination of your report (50%) and your agent’s performance (50%).

About the Project

In this project, you’ll be given pairs of sentences and questions. Your agent should read the sentence, read the question, and return an answer to the question based on the knowledge contained in the sentences. Importantly, while this is a natural language processing-themed project, you won’t be using any existing libraries; our goal here is for you to understand the low-level reasoning of NLP, not merely put existing libraries to work.

To keep things relatively reasonable, your agent will only be required to answer questions about the 500 most common words in the English language, as well as a list of 20 possible names. Your agent should also be able to interpret clock times: you may assume these will always be HH:MM(AM/PM) or simply HH:MM. For example, 9:00AM, 11:00, or 12:34PM.
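For illustration, clock times in this format can be recognized with a short regular expression. This is only a sketch; the pattern and the `is_clock_time` name are our own and not part of the starter code:

```python
import re

# Matches HH:MM optionally followed by AM/PM, e.g. "9:00AM", "11:00", "12:34PM".
# This pattern is an assumption based on the examples above; adjust it to the
# inputs you actually observe.
TIME_PATTERN = re.compile(r"^\d{1,2}:\d{2}(?:AM|PM)?$")

def is_clock_time(token: str) -> bool:
    return bool(TIME_PATTERN.match(token))

print(is_clock_time("9:00AM"))   # True
print(is_clock_time("11:00"))    # True
print(is_clock_time("mile"))     # False
```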

Because there are disagreements on what the most common words are, we’ve given you our own list of the 500 most common words for our purposes, along with the 20 names your agent should recognize: these are contained in the file mostcommon.txt.

Your Agent

To write your agent, download the starter code below. Complete the solve() method, then upload it to Gradescope to test it against the autograder. Before the deadline, make sure to select your best performance in Gradescope as your submission to be graded.

Starter Code

Here is your starter code (and the mostcommon.txt file): SentenceReadingAgent.zip.

The starter code contains two files: SentenceReadingAgent.py and main.py. You will write your agent in SentenceReadingAgent.py. You may test your agent by running main.py. You will only submit SentenceReadingAgent.py; you may modify main.py to test your agent with different inputs.

You may use a library like spacy (https://spacy.io/usage/linguistic-features) or similar tools to preprocess the mostcommon.txt file, but only for preprocessing: you cannot import such libraries in Gradescope. Instead, include the results of any preprocessing directly in your SentenceReadingAgent.py. Do not rely on another file (.txt or .csv).

The mostcommon.txt file contains all the words you will need, but different libraries may tokenize the file differently depending on your preprocessing step; you are encouraged to expand on these words in your agent’s knowledge representation.

Your solve() method will have two parameters: a string representing a sentence to read, and a string representing a question to answer. Both will contain only the 500 most common words, the names listed in that file, and/or clock times. The only punctuation will be the apostrophe (e.g. dog’s) or the last character in the string (either a period for the sentence or a question mark for the question).

For example, an input sentence could be:

  • “Ada brought a short note to Irene.”

Questions about that sentence might include:

  • “Who brought the note?” (“Ada”)
  • “What did Ada bring?” (“note” or “a note”)
  • “Who did Ada bring the note to?” (“Irene”)
  • “How long was the note?” (“short”)

Another input sentence could be:

  • “David and Lucy walk one mile to go to school every day at 8:00AM when there is no snow.”

Questions about that sentence might include:

  • “Who does Lucy go to school with?” (“David”)
  • “Where do David and Lucy go?” (“school”)
  • “How far do David and Lucy walk?” (“mile” or “one mile”)
  • “How do David and Lucy get to school?” (“walk”)
  • “At what time do David and Lucy walk to school?” (“8:00AM”)

You may assume that this second example will be the upper limit of complexity you may see in our sentences.
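As a rough illustration of how such an agent might begin, the sketch below dispatches on the question word and scans the sentence for a matching token. This is not the required approach, and the `NAMES` set here is a made-up subset of the real list in mostcommon.txt:

```python
import re

# Hypothetical subset of the 20 recognized names; the real list is in
# mostcommon.txt.
NAMES = {"Ada", "Irene", "David", "Lucy"}

def solve(sentence: str, question: str) -> str:
    # Strip terminal punctuation and tokenize on whitespace.
    tokens = sentence.rstrip(".").split()
    q_word = question.split()[0].lower()

    if q_word == "who":
        # Prefer a name that appears in the sentence but not in the question.
        for token in tokens:
            if token in NAMES and token not in question:
                return token
    if q_word == "at" or "time" in question.lower():
        # Look for a clock time such as 8:00AM.
        for token in tokens:
            if re.match(r"^\d{1,2}:\d{2}(?:AM|PM)?$", token):
                return token
    # Fallback: a real agent needs many more cases (what/where/how/...).
    return tokens[0]

print(solve("Ada brought a short note to Irene.", "Who brought the note?"))  # Ada
```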

Returning Your Solution

Your solve() method should return an answer to the question as a string. You may assume every question will be answerable by a single word from the original sentence, although we may accept multi-word answers as well (such as accepting “mile” and “one mile” above).

Submitting Your Solution

To submit your agent, go to the course in Canvas and click Gradescope on the left side. Then, select CS7637 if need be.

You will see an assignment named Mini-Project 3. Select this project, then drag your SentenceReadingAgent.py file into the autograder. If you have multiple files, add them to a zip file and drag that zip file into the autograder.

When your submission is done running, you’ll see your results.

How You Will Be Graded

Your agent will be run against 20 question-answer pairs. The first 9 will always be the same; these are the 9 contained within the main.py file provided above. The remaining 11 will be randomly selected from a large library of sentence-question pairs.

You can earn up to 40 points. You will earn 2 points for each of the 20 questions you answer correctly.

You may submit up to 40 times prior to the deadline. The large majority of students do not need nearly that many submissions, so do not feel obligated to use all 40; this cap exists primarily to prevent brute-force approaches, such as farming information about patterns in the hidden test cases or submitting highly randomized agents in hopes of a lucky result. Note that Gradescope gives us no way to increase your individual number of submissions, so we cannot return submissions to you in the case of errors or other issues; however, you should have more than enough submissions to handle errors if they arise.

You must select which of your submissions you want to count for a grade prior to the deadline. Note that by default, Gradescope marks your last submission as your submission to be graded. We cannot automatically select your best submission. Your agent score is worth 50% of your overall mini-project grade.

Your Report

In addition to submitting your agent to Gradescope, you should also write up a short report describing your agent’s design and performance. Your report may be up to 4 pages, and should answer the following questions:

  • How does your agent work? Does it use some concepts covered in our course? Or some other approach?
  • How well does your agent perform? Does it struggle on any particular cases?
  • How efficient is your agent? How does its performance change as the sentence complexity grows?
  • Does your agent do anything particularly clever to try to arrive at an answer more efficiently?
  • How does your agent compare to a human? Do you feel people interpret the questions similarly?

You are encouraged but not required to include visuals and diagrams in your four-page report. The primary goal of the report is to share your approach with your classmates, and to let you see your classmates’ approaches. You may include code snippets if you think they are particularly novel, but please do not include the entirety of your code.

Tip: Remember, we want to see how you put the content of this class into action when designing your agent. You don’t need to use the principles and methods from the lectures precisely, but we want to see your knowledge of the content reflected in your terminology and your reflection.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification and that text or symbols inside figures are at least as large as the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your report is worth 50% of your mini-project grade. As such, your report will be graded on a 40-point scale coinciding with a rubric designed to mirror the questions above. Make sure to answer those questions; if any of the questions are irrelevant to the design of your agent, explain why.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Mini-Project 4

Mini-Project 4: Monster Identification

In this project, you’ll implement an agent that will learn a definition of a particular monster species from a list of positive and negative samples, and then make a determination about whether a newly-provided sample is an instance of that monster species or not. You will submit the code for identifying these monsters to the Mini-Project 4 assignment in Gradescope. You will also submit a report describing your agent to Canvas. Your grade will be based on a combination of your report (50%) and your agent’s performance (50%).

About the Project

For the purposes of this project, every monster has a value for each of twelve parameters. The possible values are all known. The parameters and their possible values are:

  • size: tiny, small, medium, large, huge
  • color: black, white, brown, gray, red, yellow, blue, green, orange, purple
  • covering: fur, feathers, scales, skin
  • foot-type: paw, hoof, talon, foot, none
  • leg-count: 0, 1, 2, 3, 4, 5, 6, 7, 8
  • arm-count: 0, 1, 2, 3, 4, 5, 6, 7, 8
  • eye-count: 0, 1, 2, 3, 4, 5, 6, 7, 8
  • horn-count: 0, 1, 2
  • lays-eggs: true, false
  • has-wings: true, false
  • has-gills: true, false
  • has-tail: true, false

A single monster will be defined as a dictionary with those 12 keys. Each value will be one of the values from the corresponding list. The values associated with size, color, covering, and foot-type will be strings; with leg-count, arm-count, eye-count, and horn-count will be integers; and with lays-eggs, has-wings, has-gills, and has-tail will be booleans.

You will be given a list of monsters in the form of a list of dictionaries, each of which has those twelve keys and one of the listed values. Each monster will be labeled as either True (an instance of the species of monster we are currently looking at) or False (not an instance of the species of monster we are currently looking at). You will also be given a single unlabeled monster; your goal is to return a prediction—True or False—of whether the unlabeled monster is an instance of the species of monster defined by the labeled list.
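For illustration, a labeled sample and an unlabeled monster might look like this in Python. The specific values below are invented:

```python
# Illustrative data only: one labeled sample (a 2-tuple of monster and label)
# and one unlabeled monster. The specific values are made up.
labeled_monsters = [
    ({"size": "large", "color": "red", "covering": "fur", "foot-type": "paw",
      "leg-count": 4, "arm-count": 2, "eye-count": 2, "horn-count": 1,
      "lays-eggs": False, "has-wings": False, "has-gills": False,
      "has-tail": True}, True),
]
# The unlabeled monster differs only in color; your agent must decide
# whether it still belongs to the species.
unlabeled_monster = dict(labeled_monsters[0][0], color="blue")
```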

Your Agent

To write your agent, download the starter code below. Complete the solve() method, then upload it to Gradescope to test it against the autograder. Before the deadline, make sure to select your best performance in Gradescope as your submission to be graded.

Starter Code

Here is your starter code: MonsterClassificationAgent.zip.

The starter code contains two files: MonsterClassificationAgent.py and main.py. You will write your agent in MonsterClassificationAgent.py. You may test your agent by running main.py. You will only submit MonsterClassificationAgent.py; you may modify main.py to test your agent with different inputs.

Your solve() method will have two parameters. The first will be a list of 2-tuples. The first item in each 2-tuple will be a dictionary representing a single monster. The second item in each 2-tuple will be a boolean representing whether that particular monster is an example of this new monster species. The second parameter to solve() will be a dictionary representing the unlabeled monster.

Each monster species might have multiple possible values for each of the above parameters. One monster species, for instance, might include monsters with either 1 or 2 horns, but never 0. Another species might include monsters that can be red, blue, or yellow, but no other colors. Another species might include both monsters with and without wings. So, while each monster is defined by a single value for each parameter, the species as a whole may have more variation.

Returning Your Solution

Your solve() method should return True or False based on whether your function believes this new monster (the second parameter) to be an example of the species defined by the labeled list of monsters (the first parameters).

Not every list will be fully exhaustive. Your second parameter could, for example, feature a monster whose color never appeared in either the positive or negative samples. Your agent’s task is to make an educated guess. For example, you might reason, “The only difference between this monster and the positive examples is its color, and its color never appeared in the negative examples, so there is a good likelihood that this is still a positive example.”

You may assume that the parameters are independent; for example, you will not have any species that has one horn when yellow and two horns when blue, but never one horn when blue. You may assume that all parameters are equally likely to occur; for example, you will not have any species that is yellow 90% of the time and blue only 10% of the time. Those ratios may appear in the list of samples you receive, but the underlying distribution of possibilities will be even. You may assume that these parameters are all that there is: if two monsters have the exact same parameters, they are guaranteed to be the same species. Finally, you should assume that each list is independent: you should not use knowledge from a prior test case to inform the current one.
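Under these assumptions, one possible starting point is to tally which values appear among the positive versus negative examples and let the new monster’s values vote. This is only an illustrative sketch, not the expected solution:

```python
def solve(samples, new_monster):
    # samples: list of (monster_dict, bool) pairs; new_monster: a monster dict.
    # Collect, per parameter, the values seen in positive and negative examples.
    allowed, forbidden = {}, {}
    for monster, label in samples:
        target = allowed if label else forbidden
        for key, value in monster.items():
            target.setdefault(key, set()).add(value)

    # Each parameter votes: a value seen among positives counts toward True,
    # a value seen only among negatives counts toward False. Unseen values
    # abstain, which builds in a mild bias toward True on ties.
    votes = 0
    for key, value in new_monster.items():
        if value in allowed.get(key, set()):
            votes += 1
        elif value in forbidden.get(key, set()):
            votes -= 1
    return votes >= 0
```

A usage sketch: with one positive red, one-horned sample and one negative green, hornless sample, a red one-horned monster is classified True and a green hornless one False.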

Submitting Your Solution

To submit your agent, go to the course in Canvas and click Gradescope on the left side. Then, select CS7637 if need be.

You will see an assignment named Mini-Project 4. Select this project, then drag your MonsterClassificationAgent.py file into the autograder. If you have multiple files, add them to a zip file and drag that zip file into the autograder.

When your submission is done running, you’ll see your results.

How You Will Be Graded

Your agent will run against 20 test cases. The first four of these will always be the same; these are those contained in the original main.py. The last 16 will be randomly generated.

You can earn up to 40 points. Because the list of labeled monsters is non-exhaustive, it is highly unlikely you can write an agent that classifies every single monster correctly; there will always be some uncertainty. For that reason, you will receive full credit if your agent correctly classifies 17 or more of the monsters. Similarly, because every label is a simple true/false, even a randomly performing agent can likely get 50% correct with no intelligence under the hood. For that reason, you will receive no credit if your agent correctly classifies 7 or fewer monsters.

Between 8 and 17 correct classifications, you will receive 4 points for each correct classification beyond the seventh: 4 points for 8/20, 8 for 9/20, 12 for 10/20, and so on, up to 40 points for correctly classifying 17 out of 20 or better.
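The scoring rule above amounts to a simple clamped linear formula, shown here only to illustrate the arithmetic (`agent_score` is our own name, not part of the assignment):

```python
def agent_score(correct: int) -> int:
    # 0 points at 7 or fewer correct; 4 points per correct answer above 7;
    # capped at 40 points, reached at 17 or more correct out of 20.
    return max(0, min(40, (correct - 7) * 4))

print(agent_score(7))   # 0
print(agent_score(8))   # 4
print(agent_score(17))  # 40
```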

You may submit up to 40 times prior to the deadline. The large majority of students do not need nearly that many submissions, so do not feel obligated to use all 40; this cap exists primarily to prevent brute-force approaches, such as farming information about patterns in the hidden test cases or submitting highly randomized agents in hopes of a lucky result. Note that Gradescope gives us no way to increase your individual number of submissions, so we cannot return submissions to you in the case of errors or other issues; however, you should have more than enough submissions to handle errors if they arise.

You must select which of your submissions you want to count for a grade prior to the deadline. Note that by default, Gradescope marks your last submission as your submission to be graded. We cannot automatically select your best submission. Your agent score is worth 50% of your overall mini-project grade.

Your Report

In addition to submitting your agent to Gradescope, you should also write up a short report describing your agent’s design and performance. Your report may be up to 4 pages, and should answer the following questions:

  • How does your agent work? Does it use some concepts covered in our course? Or some other approach?
  • How well does your agent perform? Does it struggle on any particular cases?
  • How efficient is your agent? How does its performance change as the number of labeled monsters grows?
  • Does your agent do anything particularly clever to try to arrive at an answer more efficiently?
  • How does your agent compare to a human? Do you feel people approach the problem similarly?

You are encouraged but not required to include visuals and diagrams in your four-page report. The primary goal of the report is to share your approach with your classmates, and to let you see your classmates’ approaches. You may include code snippets if you think they are particularly novel, but please do not include the entirety of your code.

Tip: Remember, we want to see how you put the content of this class into action when designing your agent. You don’t need to use the principles and methods from the lectures precisely, but we want to see your knowledge of the content reflected in your terminology and your reflection.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification and that text or symbols inside figures are at least as large as the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your report is worth 50% of your mini-project grade. As such, your report will be graded on a 40-point scale coinciding with a rubric designed to mirror the questions above. Make sure to answer those questions; if any of the questions are irrelevant to the design of your agent, explain why.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Mini-Project 5

Mini-Project 5: Monster Diagnosis

In this project, you’ll implement an agent that can diagnose monster diseases. Based on a list of diseases and their symptoms and a particular monster’s elevated and reduced vitamin levels, you will diagnose the disease(s) affecting that monster. You will submit the code for diagnosing these monsters to the Mini-Project 5 assignment in Gradescope. You will also submit a report describing your agent to Canvas. Your grade will be based on a combination of your report (50%) and your agent’s performance (50%).

About the Project

Monster physiology relies on a balance of 26 vitamins, conveniently named Vitamin A through Vitamin Z. Different monster ailments can cause elevated or reduced levels for different vitamins. Unfortunately, every ailment affects every monster species differently, so there is no canonical list of all monster ailments and their effects; instead, the ailments must be interpreted in the context of a particular species.

In this project, you’ll diagnose a particular monster based on its symptoms and a list of ailments and their effects on that monster species. Both the symptoms of an ailment and a monster’s symptoms will be represented by a dictionary, where the keys are the different vitamin letters and the values are either + (for elevated), – (for reduced), or 0 (for normal). For example:

{"A": "+", "B": "0", "C": "-", "D": "0", "E": "0", "F": "+", …}

This would represent elevated levels of Vitamins A and F, and a reduced level of Vitamin C. This could be the symptoms of a particular patient (e.g. “Sully presented with elevated levels of Vitamins A and F, and a reduced level of Vitamin C”), or it could be the symptoms of a particular disease (e.g. “Alphaitis causes elevated Vitamins A and F and a reduced Vitamin C in monsters like Sully”).

Your Agent

To write your agent, download the starter code below. Complete the solve() method, then upload it to Gradescope to test it against the autograder. Before the deadline, make sure to select your best performance in Gradescope as your submission to be graded.

Starter Code

Here is your starter code: MonsterDiagnosisAgent.zip.

The starter code contains two files: MonsterDiagnosisAgent.py and main.py. You will write your agent in MonsterDiagnosisAgent.py. You may test your agent by running main.py. You will only submit MonsterDiagnosisAgent.py; you may modify main.py to test your agent with different inputs.

Your solve() method will have two parameters. The first will be a list of diseases and their symptoms. This will be provided as a dictionary, where the keys are the names of diseases and their values are each a dictionary representing its symptoms. For example:

{"Alphaitis": {"A": "+", "B": "0", "C": "-", "D": "0", "E": "0", "F": "+", ...}, "Betatosis": {"A": "0", "B": "+", "C": "-", "D": "0", "E": "+", "F": "-", ...}, "Gammanoma": {"A": "0", "B": "0", "C": "+", "D": "+", "E": "+", "F": "+", ...}, ...}

There may be up to 24 diseases. Each disease will have values for all 26 vitamins. Most vitamins will be unaffected by any particular disease; most diseases only affect 3-6 vitamins.

The second parameter to the function will be a particular set of symptoms, given as a dictionary, such as:

{"A": "+", "B": "0", "C": "-", "D": "0", "E": "0", "F": "+", …}

Your goal is to identify the smallest subset of diseases from the list of ailments that can explain the monster’s symptoms.

If the patient has two diseases with opposite effects, they cancel each other out. For example, if a patient had both Alphaitis and Betatosis (according to the definitions above), they would have a normal level of Vitamin F because Alphaitis elevates F and Betatosis reduces F.

If the patient has two diseases with the same effect, their effect remains the same. For example, if a patient had both Alphaitis and Betatosis (according to the definitions above), they would have a reduced level of Vitamin C because both diseases reduce Vitamin C. There is no extra effect from having multiple diseases with the same effect.

If a patient has more than two diseases, then each Vitamin moves in whichever direction is caused by the largest number of diseases. For example, if a patient had Alphaitis, Betatosis, and Gammanoma, they would exhibit reduced levels of Vitamin C: both Alphaitis and Betatosis reduce Vitamin C, while Gammanoma elevates it. Two reductions plus one elevation leads to a reduction. If, on the other hand, they had four diseases, two of which reduced Vitamin C and two of which elevated Vitamin C, their Vitamin C levels would be normal.
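The combination rules above amount to a majority vote per vitamin, which might be sketched like this (`combine` is a hypothetical helper, not part of the starter code):

```python
def combine(symptom_dicts):
    # symptom_dicts: a list of dictionaries like {"A": "+", "B": "0", ...}.
    # For each vitamin, count elevations and reductions across the diseases;
    # the majority direction wins, and a tie (including no effect at all)
    # yields a normal level.
    combined = {}
    for vitamin in symptom_dicts[0]:
        ups = sum(d[vitamin] == "+" for d in symptom_dicts)
        downs = sum(d[vitamin] == "-" for d in symptom_dicts)
        if ups > downs:
            combined[vitamin] = "+"
        elif downs > ups:
            combined[vitamin] = "-"
        else:
            combined[vitamin] = "0"
    return combined
```

With the Alphaitis and Betatosis definitions above (restricted to Vitamins A, C, and F), combining the two yields elevated A, reduced C, and normal F, matching the cancellation example.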

Returning Your Solution

Your solve() method should return a list of strings. Each string should be the name of one of the (up to) 24 diseases. Together, the diseases should explain all of the symptoms of the patient. If there are multiple sets of diseases that can explain all the symptoms, then you should return the set with the minimum number of diseases according to the principle of parsimony. For this project, you may assume that all diseases are equally likely and that all symptoms will be covered.

For the two test cases in the starter code, the answers should be: ["Alphaitis", "Betatosis"] and ["Gammanoma", "Deltaccol", "Epsicusus"].
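Given the parsimony requirement, one naive approach is to search candidate disease sets from smallest to largest and return the first whose combined effect matches the patient. This is a brute-force sketch only; with up to 24 diseases the search is exponential, so a real agent will likely need pruning:

```python
from itertools import combinations

def combine(symptom_dicts):
    # Majority vote per vitamin; ties (or no effect) give a normal level.
    result = {}
    for v in symptom_dicts[0]:
        ups = sum(d[v] == "+" for d in symptom_dicts)
        downs = sum(d[v] == "-" for d in symptom_dicts)
        result[v] = "+" if ups > downs else "-" if downs > ups else "0"
    return result

def solve(diseases, patient):
    # diseases: {name: symptom_dict}; patient: symptom_dict.
    # Try candidate sets in order of increasing size, so the first match is
    # the most parsimonious diagnosis.
    names = list(diseases)
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if combine([diseases[n] for n in subset]) == patient:
                return list(subset)
    return []
```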

Submitting Your Solution

To submit your agent, go to the course in Canvas and click Gradescope on the left side. Then, select CS7637 if need be.

You will see an assignment named Mini-Project 5. Select this project, then drag your MonsterDiagnosisAgent.py file into the autograder. If you have multiple files, add them to a zip file and drag that zip file into the autograder.

When your submission is done running, you’ll see your results.

How You Will Be Graded

Your agent will run against 20 test cases. The first two of these will always be the same; these are those contained in the original main.py. The last 18 will be randomly generated.

You can earn up to 40 points. You will earn one point for each of the test cases for which you correctly identify a list of diseases that explain all the symptoms. You will earn an additional point for each of the test cases for which you identify the smallest list of diseases that explain all the symptoms.

You may submit up to 40 times prior to the deadline. The large majority of students do not need nearly that many submissions, so do not feel obligated to use all 40; this cap exists primarily to prevent brute-force approaches, such as farming information about patterns in the hidden test cases or submitting highly randomized agents in hopes of a lucky result. Note that Gradescope gives us no way to increase your individual number of submissions, so we cannot return submissions to you in the case of errors or other issues; however, you should have more than enough submissions to handle errors if they arise.

You must select which of your submissions you want to count for a grade prior to the deadline. Note that by default, Gradescope marks your last submission as your submission to be graded. We cannot automatically select your best submission. Your agent score is worth 50% of your overall mini-project grade.

Your Report

In addition to submitting your agent to Gradescope, you should also write up a short report describing your agent’s design and performance. Your report may be up to 4 pages, and should answer the following questions:

  • How does your agent work? Does it use some concepts covered in our course? Or some other approach?
  • How well does your agent perform? Does it struggle on any particular cases?
  • How efficient is your agent? How does its performance change as the number of diseases grows?
  • Does your agent do anything particularly clever to try to arrive at an answer more efficiently?
  • How does your agent compare to a human? Do you feel people approach the problem similarly?

You are encouraged but not required to include visuals and diagrams in your four-page report. The primary goal of the report is to share your approach with your classmates, and to let you see your classmates’ approaches. You may include code snippets if you think they are particularly novel, but please do not include the entirety of your code.

Tip: Remember, we want to see how you put the content of this class into action when designing your agent. You don’t need to use the principles and methods from the lectures precisely, but we want to see your knowledge of the content reflected in your terminology and your reflection.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification, with text and symbols inside figures at a size equal to or greater than that of the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your report is worth 50% of your mini-project grade. As such, your report will be graded on a 40-point scale coinciding with a rubric designed to mirror the questions above. Make sure to answer those questions; if any of the questions are irrelevant to the design of your agent, explain why.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Homework 1

Homework 1: Semantic Networks with the Ring

Answer the following prompt in a maximum of 5 pages (excluding references) in JDF format. Any content beyond 5 pages will not be considered for grading. 5 pages is a maximum, not a target; this length is intentionally set expecting that your submission may include diagrams, figures, pictures, etc. These should be incorporated into the body of the paper.

If you would like to include additional information beyond the page limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader.

Homework 1 Prompt

Ever since the One Ring was forged by the Dark Lord Sauron, few have been able to resist its seductive power. Gollum was famously corrupted by it, and during his journey to destroy it, even Frodo was tempted by it. One of the few to be able to resist the call of its power was Samwise (Sam) Gamgee.

At one point during their journey to destroy the ring, Frodo, Sam, and Gollum arrive at a river. The river is too wide and deep to swim across, but fortunately, there is a small raft docked on one side. Frodo is too tired from the burden of carrying the ring to drive the raft, however, and Gollum is too unreliable to steer. So, only Sam can steer the raft.

The raft is only large enough for two people at a time; if all three try to ride it, they will sink. So, Sam will have to take Gollum and Frodo across one at a time. However, neither Gollum nor Frodo can be trusted alone with the One Ring on either side of the river by himself. In fact, neither of them can be trusted alone on the raft with Sam and the ring: its allure is too great, and if given the opportunity, they will simply shove Sam off the side and keep the ring. Therefore, if either Frodo or Gollum is on the raft with Sam, the ring must be left on one of the riverbanks; if the ring is on the raft, then only Sam can be on the raft with it.

How can Frodo, Sam, Gollum, and the ring get across the river under these conditions? Specifically, neither Frodo nor Gollum can ever be alone with the ring on either bank of the river by himself, nor can Frodo or Gollum be on the raft with the ring. If the ring is on the raft, then only Sam can be on the raft with it. Obviously, Sam is permitted to steer the raft alone.

First, in a figure, construct one semantic network representing two states with a transition between them. This should occupy approximately half a page. Make sure to include all components of the states, as well as all components required to represent the transition.

Second, in another figure, start with the problem’s initial state and then apply generate & test to build out a complete semantic network in order to solve the problem. In applying generate & test, your generator should be smart enough to make only valid moves (e.g. it will not try to move Gollum and the ring together at once, or make consecutive moves in the same direction across the river without Sam bringing the raft back), but it should not be smart enough to only make moves that result in valid states (e.g., it should still try to move Frodo first, even though that move results in an invalid state, specifically Gollum alone with the ring). Your tester, in turn, should check each generated state to see if (a) it follows the rules, and (b) if it has met the goal. You may decide whether identifying states that have already been visited is the responsibility of the generator or the tester.
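Though the assignment asks for figures rather than code, the generate & test loop described above can be sketched in Python for concreteness. This is an illustrative sketch, not part of the required submission; the state encoding and the rule interpretation (we treat the ring on a bank with Frodo or Gollum, or both, and no Sam as invalid) are our own assumptions:

```python
from collections import deque

# State: frozenset of who/what is on the LEFT bank; everyone else is on the right.
# Sam steers, so the raft is always on Sam's side.
ITEMS = frozenset({"Frodo", "Sam", "Gollum", "ring"})

def is_valid(state):
    # Tester's rule check: the ring may not be on a bank with Frodo or Gollum
    # unless Sam is there too (one possible reading of the prompt's rules).
    for bank in (state, ITEMS - state):
        if "ring" in bank and "Sam" not in bank and ({"Frodo", "Gollum"} & bank):
            return False
    return True

def generate(state):
    # Generator: Sam crosses alone, or with exactly one passenger or the ring.
    # It proposes only structurally possible moves; it does NOT check rule validity.
    sam_side = state if "Sam" in state else ITEMS - state
    yield state ^ {"Sam"}                    # Sam crosses alone
    for item in sam_side - {"Sam"}:
        yield state ^ {"Sam", item}          # symmetric difference flips the movers

def solve():
    # Breadth-first generate & test, with repeated-state pruning in the tester.
    start, goal = ITEMS, frozenset()
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        for nxt in generate(path[-1]):
            if not is_valid(nxt):            # tester rejects rule-breaking states
                continue
            if nxt == goal:                  # tester checks the goal
                return path + [nxt]
            if nxt not in seen:              # tester prunes already-visited states
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Here the generator proposes every structurally possible crossing, and the tester both rejects rule-breaking states and checks the goal, mirroring the division of labor described above.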

Include the entire semantic network that solves this problem. Clearly indicate which states the tester failed, and why. The semantic network should explore the entire problem space: every state should either be ruled out or have its following states explored. We expect this will fit on one page; you may create a more space-efficient representation if need be. As long as it is legible, you may also hand-write the network and insert it as an image into your paper (neatness is expected).

For the purposes of this question, please ignore how foolhardy it would be to leave a ring of such immense power alone on the side of a river, as well as the fact that Frodo would never willingly permit Sam to carry the burden of the ring.

Ensure your submission is clear, concise, and legible, reflecting your grasp of the lecture concepts (semantic network, transition state, generate and test).

Submission Instructions 

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification, with text and symbols inside figures at a size equal to or greater than that of the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your assignment will be graded on a 10-point scale coinciding with a rubric designed to mirror the question structure. Make sure to answer every question posed by the prompt. Pay special attention to bolded words and question marks in the question text. For further information on how the assignment is graded, see the rubric in Canvas.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Homework 2

Homework 2: Learning a Concept/Model (Or, To Be or Not to Be Soup)

Answer the following prompt in a maximum of 5 pages (excluding references) in JDF format. Any content beyond 5 pages will not be considered for grading. 5 pages is a maximum, not a target; this length is intentionally set expecting that your submission may include diagrams, figures, pictures, etc. These should be incorporated into the body of the paper.

If you would like to include additional information beyond the page limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader.

Homework 2 Prompt

For several semesters, this prompt considered one of the great internet debates of all time: what is a sandwich? Unfortunately, that debate is so well developed that emerging AI agents can actually give extremely compelling answers to these questions.

So, instead, this assignment will look at a slightly different debate: what is soup? You will explore what it means to learn the concept of soup through top-down and bottom-up processing, incremental concept learning, classification, and an example with case-based reasoning.

First, take the list of dishes below and decide whether each one is a soup. In your assignment, list which of these you would consider to be soup and which you would consider not to be soup. If you are unfamiliar with any of these types of soup (or not-soup), you may use your favorite search engine to find out what they are.

  • Baked beans
  • Borscht
  • Cereal with milk
  • Chicken broth
  • Chicken noodle soup
  • Chicken pot pie
  • Chili
  • Chocolate pudding
  • Clam chowder
  • Coconut milk
  • Consommé
  • Corn chowder
  • Crème brûlée
  • French onion soup in a bread bowl
  • Fruit salad in syrup
  • Gazpacho
  • Guacamole
  • Gumbo
  • Hot chocolate with marshmallows
  • Hot tea with tea leaves
  • Ice cream sundae
  • Iced tea
  • Jambalaya
  • Macaroni and cheese
  • Massaman curry
  • Mashed potatoes
  • Melted ice cream
  • Menudo
  • Milkshake
  • Miso soup
  • Oatmeal
  • Pasta bolognese
  • Pho
  • Rice pudding
  • Risotto
  • Spaghetti with marinara sauce
  • Stew
  • Tomato bisque
  • Vichyssoise
  • Yogurt with granola

Second, after labeling each of these 40 dishes as either soup or not soup, visually illustrate the process of incremental concept learning using a sequential series of potential soups. Construct a model, similar to one presented in the lectures, of what a soup is, noting which heuristics are used to specialize and generalize the model with each additional positive or negative example. Step through the process with at least six potential soups (at least three positive and three negative examples), visually illustrating each one. Then, briefly note whether any of the dishes you did not include would significantly change the model if you added it next.
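As a rough companion to the figures the prompt asks for (not a substitute for them), the generalize/specialize loop can be sketched as a toy learner whose model is a set of required features plus a set of forbidden ones. The feature names below are hypothetical, and the drop-link/forbid-link heuristics are simplified from the lecture versions:

```python
def learn(examples):
    """Toy incremental concept learner. The model is (required, forbidden):
    features a soup must have, and features it must not have."""
    required, forbidden = None, set()
    for features, is_soup in examples:
        f = set(features)
        if is_soup:
            # Generalize: the first positive example seeds the model; later
            # positives drop any required feature they lack (drop-link).
            required = f if required is None else required & f
            forbidden -= f
        elif required is not None and required <= f:
            # Specialize on a near miss: the negative example satisfies the
            # model, so forbid one distinguishing extra feature (forbid-link).
            extras = f - required
            if extras:
                forbidden.add(sorted(extras)[0])
    return required, forbidden

def matches(features, required, forbidden):
    """Does a dish fit the learned concept?"""
    f = set(features)
    return required <= f and not (forbidden & f)

# Hypothetical feature sets for illustration only.
examples = [
    ({"liquid", "savory", "spooned"}, True),             # e.g. chicken broth
    ({"liquid", "savory", "spooned", "noodles"}, True),  # e.g. chicken noodle soup
    ({"liquid", "savory", "spooned", "solid_base"}, False),  # a near miss
]
```

Running `learn(examples)` leaves `{"liquid", "savory", "spooned"}` required and `{"solid_base"}` forbidden; your own model should instead be illustrated visually, example by example, as the prompt requires.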

Next, attempt a classification approach to defining a soup. Select five parameters (similar to “Lays eggs?” and “Has wings?” from the bird example in the Classification lecture) that would be useful in differentiating soups. Then, define values for those parameters for at least six dishes. You are encouraged to format this as a table and to include a column showing whether you had labeled these six foods as soup or not soup in the first step of this homework assignment. Next, use these values to construct an abstracted classification tree definition of what a soup is. The format of this tree can be based on your own intuition. Then use your classification tree to classify at least ten other dishes from the list above as soup or not soup.
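As a hedged sketch of what an abstracted classification tree might look like once built (the five boolean parameters below are purely hypothetical; yours will differ, and your tree may branch differently):

```python
def classify(dish):
    """Walk a hand-built classification tree over five boolean parameters;
    returns 'soup' or 'not soup'. Parameters and example dishes are illustrative."""
    if not dish["liquid_base"]:
        return "not soup"        # e.g. mashed potatoes
    if dish["primarily_sweet"]:
        return "not soup"        # e.g. melted ice cream
    if not dish["eaten_with_spoon"]:
        return "not soup"        # e.g. iced tea
    if not dish["served_as_a_course"]:
        return "not soup"        # e.g. a broth used only as an ingredient
    if dish["solid_outweighs_liquid"]:
        return "not soup"        # e.g. risotto
    return "soup"
```

Each branch asks one parameter, top-down, in the spirit of the bird example from the Classification lecture; a table of your six dishes’ parameter values would feed directly into such a tree.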

Finally, answer the age-old question, “Are grits soup?” using each of these three perspectives: the model you developed through incremental concept learning; the classifier you developed based on those parameters and their values; and a case-based reasoning approach. For case-based reasoning, you only need to state which soup (or non-soup) you think is the most “similar” case to grits.

Ensure your submission demonstrates your understanding of the concepts from the lectures and is clear, concise, neat, and legible.

Submission Instructions

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification, with text and symbols inside figures at a size equal to or greater than that of the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your assignment will be graded on a 10-point scale coinciding with a rubric designed to mirror the question structure. Make sure to answer every question posed by the prompt. Pay special attention to bolded words and question marks in the question text. For further information on how the assignment is graded, see the rubric in Canvas.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

Homework 3

Homework 3: What Makes Up an Analogy?

Answer the following prompt in a maximum of 5 pages (excluding references) in JDF format. Any content beyond 5 pages will not be considered for grading. 5 pages is a maximum, not a target; this length is intentionally set expecting that your submission may include diagrams, figures, pictures, etc. These should be incorporated into the body of the paper.  

If you would like to include additional information beyond the page limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader. 

Homework 3 Prompt

Analogies are commonly used in scientific reasoning to leverage existing theories to explore new domains, but they are also quite often used in literature to poetically draw parallels between dramatically different ideas. As we all know, our appreciation for literature is only increased when we painstakingly tear it apart and analyze it like a chemical compound, so let’s do that here—and save pondering how to design an AI agent that can understand the sarcasm of that sentence for another day.

Select an analogy from anywhere in literature. If you have trouble thinking of one, GoodReads has a list of several famous ones that you may choose from.

Once you have selected an analogy, develop models of each of the two separate parts of the analogy (before any transfer has taken place). This could take the form of a frame representation, a mind map, a diagram, a plain text description, or whatever else helps you clearly illustrate the structures of the source and target in the analogy and the similarities between them.

Next, examine what the author intends to transfer from the source in the analogy to the target. Demonstrate your understanding of the concepts of analogical reasoning explained in the lectures.

Then, rewrite the analogy with the same target but using a different source that changes the meaning of the analogy. Discuss how this different source changes the target when its relationships, rather than the original source’s, are transferred analogically.

For example, when discussing the emergency transition to online teaching, I often use the analogy: “Asking someone whose training and experience are in face-to-face teaching to suddenly start teaching online is like asking a basketball player to suddenly switch to baseball.” In analyzing this analogy, we would model the source (the basketball player switching to baseball) and the target (the face-to-face teacher switching to online), then discuss what the source adds to our understanding of the target (that while there may be some commonalities in skillset, there are significant differences, and so we shouldn’t expect immediate success). Then, we would write a new analogy, such as: “Asking someone whose training and experience are in face-to-face teaching to suddenly start teaching online is like asking a stage actor to film a movie.” Then, we would evaluate how this new source changes the analogy (for example, that face-to-face teaching and stage acting both feed on the energy of a live audience).

Ensure your submission demonstrates your understanding of the concepts from the lectures and is clear, concise, neat, and legible.

Submission Instructions 

Complete your assignment using JDF format, then save your submission as a PDF. Assignments should be submitted via this Canvas page. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.

After submitting, download your submission from Canvas to verify that you’ve uploaded the correct file. Check that any included figures are legible at standard magnification, with text and symbols inside figures at a size equal to or greater than that of the figure captions.

This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes. 

Late work is not accepted without advance agreement via the extension request process except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.

Grading Information

Your assignment will be graded on a 10-point scale coinciding with a rubric designed to mirror the question structure. Make sure to answer every question posed by the prompt. Pay special attention to bolded words and question marks in the question text. For further information on how the assignment is graded, see the rubric in Canvas.

Peer Review

After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone. See the course participation policy for full details about how points are awarded for completing peer reviews.

About the Exams

Canvas by default hides any information about the exams until the exam opens. This page serves to give you an overview of what the exam entails so you can prepare prior to the exam open date.

Exam Coverage

Exam 1 will cover lessons 01 (Introduction to Knowledge-Based AI) through 12 (Logic). Exam 2 will cover lessons 13 (Planning) through 25 (Advanced Topics). There will not be any questions specifically about earlier lessons on Exam 2, although there may be questions that require knowledge from earlier lessons to answer questions about later lessons (for example, you might need to know the operators from the Logic lesson to answer questions on the Planning lesson).

Exam Structure

Each exam will consist of 22 five-answer multi-correct multiple-choice questions. All questions come from the lectures; no readings are tested on the exam.

Each of the five-answer multi-correct multiple-choice questions is eligible for partial credit. You will receive one point for each answer you correctly select, and one point for each answer you correctly leave unselected. For example, imagine the question, “Which of these are planets?” with the options Earth, the Moon, the Sun, Jupiter, and Mercury. If you were to select “Earth”, “the Moon”, and “Jupiter”, you would receive three points: two points for correctly selecting “Earth” and “Jupiter”, and one point for correctly leaving “the Sun” unselected.

Every question has between 1 and 4 correct answers; there is no question where no options are correct or where all options are correct. No credit will be given on any question for which you select none of the answers or all of the answers; this is to prevent blank or completely filled-in submissions aiming for half credit with no subject matter knowledge.
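The two scoring rules above can be read as a small scoring function. This is our own illustration of the described policy, reusing the option names from the planets example, not an official grading script:

```python
def score_question(correct, selected, options):
    """Score one multi-correct multiple-choice question: one point per option
    handled correctly (selected when correct, or left unselected when incorrect).
    All-blank and all-selected submissions earn no credit."""
    if not selected or set(selected) == set(options):
        return 0
    return sum((opt in correct) == (opt in selected) for opt in options)

# The planets example from above:
options = ["Earth", "the Moon", "the Sun", "Jupiter", "Mercury"]
correct = {"Earth", "Jupiter", "Mercury"}
```

With these inputs, selecting Earth, the Moon, and Jupiter scores 3, matching the worked example: points for Earth, Jupiter, and the correctly unselected Sun.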

You will have 90 minutes to complete each exam, and you are permitted to use books, notes, or the course video material. You may not interact with anyone during the exam, but you may visit the course forum if you would like as long as you do not post. You may not use any electronic devices during the exam except for the device you are using to complete the exam.

On-Boarding

We proctor our tests with a tool called Honorlock. Honorlock uses your webcam, microphone, and screen capture to observe your test session and ensure you adhere to test policies. Note that Honorlock requires Chrome to run.

Prior to completing a real test, you should first complete the on-boarding process. First, in Canvas, go to Honorlock on the left sidebar, and then select the On-Boarding Quiz. Follow the prompts to enable proctoring, then proceed to complete the on-boarding test. Note that the on-boarding test is equipped with the same settings as the real test, so you can use this to experiment with how you will open notes, view documents, etc. during the real test.

Taking Your Exam

When the exam window opens (at least one week before the exam deadline), go to Canvas and select Honorlock on the left sidebar. This time, select the exam. Follow the prompts to enable proctoring, then continue to take the test.

No room scan is required. The test is open-book, open-note, open-internet: you may consult any materials you want as long as you do not interact live with another human being. This means you may not post on the course forum during the exam, text during the exam, talk on the phone during the exam, or use a separate unproctored device during the exam. Any suspected attempt to gain live support from another person during the exam will result in a student misconduct case, which can lead to a 0 on the exam, sanctions including academic probation, suspension, or dismissal, and prohibition from withdrawing from the class.

That said, remember that Honorlock is not automated proctoring: it is remote, asynchronous, streamlined proctoring. All flagged events are reviewed by the instructor or a teaching assistant. In this class, you will not automatically fail your exam because someone came into the room or you had to briefly get up and answer the door. If it is clear that no unauthorized collaboration could be happening (e.g., we can hear the conversations you’re having), then there will be no repercussions.

Grading Information

Each exam is graded out of 110 possible points (22 questions, 5 options each). Your grade and feedback will be returned to you via Canvas. An announcement will be made via the course forum when grades are returned.

Remember that any grade Canvas shows you at the end of the exam will not be correct. Canvas has no option to set up grading exactly like ours, and our approach is a little more generous than Canvas’s default. (We have tried telling it not to show you numbers at all so that they don’t scare you, but that setting seems to be unreliable.)

Class Participation

Class Participation

One of the biggest assets of online education is the ability of students to have a larger impact on their classmates’ course experience, sharing their insights, expertise, and ideas. Thus, participation is required as a way of pooling this wonderful resource for everyone’s benefit. However, we understand that requiring participation has a tendency to incite inauthentic participation. Our goal with the participation policy is to give students enough ways to fulfill their participation credit in the way that is most natural and useful to them.

There are a number of different ways to earn participation points. Our goal is for the participation policy to be “invisible” to most students, in that organic participation is sufficient to fulfill these requirements. If you are active on the course forum (e.g. posting high-quality topics or comments a couple times a week) and complete your peer review tasks, you shouldn’t ever need to worry about the participation policy. We expect that for the majority of students, you’ll earn your participation points without really trying.

The following are the ways you may earn participation credit.

  • 0.5 to 1.5 points: Provide a classmate a peer review. You will be assigned three peer reviews per assignment by default, and you may request to give up to three additional peer reviews per assignment. Reviews submitted within 4 days of the original assignment deadline are worth 1.5 points; within 7 days of the original assignment deadline are worth 1.0 points; and beyond 7 days are worth 0.5 points. Note that for all peer reviews, the peer review itself must be useful; simply filling out the form and writing a sentence or two isn’t sufficient to receive credit for peer reviews.
  • 0.1 to 3.0 points: Post a high-quality contribution on the course forum (actual point value varies by post; make sure your Georgia Tech email is attached to your course forum account). (Maximum 40 points.)
  • 1.0 points: Complete one of the course surveys (start-of-course, quarter-course, team selection, mid-course, and end-of-course), or other surveys indicated as eligible for participation points on the Quizzes page on Canvas.
  • 0.5 points: Submit a candidate exam question to the form on Canvas. (Maximum 10 points.)
  • 1.0 points: Complete this page’s secret survey before week 2 to indicate you read the entire syllabus; to access it, click the word ‘secret’ earlier in this sentence.
  • Additional points: Additional points may be awarded based on other things that come up (for example, completing a student spotlight or bringing in a guest for a Q&A).
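The peer-review timing tiers above can be read as a simple schedule. This is a sketch of the stated policy for quick reference, not an official calculator, and it assumes whole days measured from the original assignment deadline:

```python
def peer_review_points(days_after_deadline):
    """Points for one useful peer review, by days after the original
    assignment deadline (per the tiers above)."""
    if days_after_deadline <= 4:
        return 1.5
    if days_after_deadline <= 7:
        return 1.0
    return 0.5
```

Remember that the review itself must be useful to earn any of these points; a sentence or two on the form isn’t sufficient.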

Grading Information

Your participation points will be counted out of 100 possible points and included as 10% of your final average. Earning 100 points or above will give 100% of the possible participation credit. To help you track your progress, we will post participation updates during the semester (around weeks 6 and 12). Participation grades will be finalized on Friday of Week 12, so you should complete all participation activities by Thursday of Week 12.

It’s intentionally possible to complete your participation credit through peer review and surveys alone (if you request enough extra peer reviews), though the course forum and other mechanisms can certainly supplement that total. It generally would be very difficult to complete your participation credit without completing the majority of your assigned peer reviews, however.

There is no mid-semester deadline on participation; you can take entire weeks off if necessary as long as you earn your points at some point during the semester.

Course FAQ

The following are answers to frequently-asked questions from previous semesters of the course. You’re responsible for knowing any content on this page on the first day of the course; we also may add to this page as the semester goes along, but you aren’t responsible for knowing anything added after day 1.

Will I be penalized for failing to adhere to JDF on my submissions?

Yes and no. The primary purpose of JDF is to standardize a document format in a way that lets us give useful expectations about assignment submission lengths that include both text and figures. So, there will be major deductions if you deviate from JDF in a way that breaks that purpose, such as deviations from the prescribed margin size, text size, typeface, and line spacing.

That said, the secondary purpose of JDF is to make your submissions look clean and professional, and to prepare you for the potential world of academic writing where you’re expected to adhere to document formats. So, if there are any cosmetic deviations from JDF that jump out immediately, they may be subject to small deductions. That would include things like: the formatting of section headers, paragraph spacing, and caption formatting.

We won’t be going through your document with a ruler ensuring that your spacing is exactly 1.26 instead of 1.25 or anything like that, though. If deviations can’t be identified during the normal course of viewing the document, you’ll be fine.

If I previously enrolled in this class and withdrew/failed, can I reuse my work?

First and foremost: note that there is no guarantee that the assignment instructions, rubrics, grading standards, etc. have not changed semester to semester. So, even if you are retaking the class, you should not assume that your assignments from the last time you took it still adhere to this semester’s instructions—and even if they do, you should not assume you will receive the same grade. Similarly, if there is any plagiarized content in your assignment, you should not assume it will not be caught this time just because it was not caught last time; our mechanisms and standards for detecting plagiarism have changed over time, and occasionally we catch misconduct the second time around that we overlooked the first time.

All that said: we do not penalize self-plagiarism in this course as a misconduct issue. If you’ve taken this course before and completed assignments, you can resubmit those assignments without worrying about self-plagiarism.

Each semester, TAs receive numerous queries about self-plagiarism, even with this FAQ in place. To clarify: tools like Turnitin will flag resubmitted assignments as plagiarism. This is expected and not a cause for concern; our TAs manually review such instances to ensure everything is appropriate. Importantly, there is no need to notify the instructional team in advance that you are reusing a previous semester’s work.

Who grades my assignments?

For each written assignment, you’re randomly assigned to one of our graders. After they evaluate and input their results, we implement measures to ensure consistent grading across all graders. This process might lead to adjustments in individual grades. Once finalized, grades are then posted. You may see a name listed on the grade you receive, but this is not necessarily the sole person who entered your grades; Canvas automatically labels the last person to adjust your grade as the author of your grade, but that adjustment could be anything from grading your entire assignment to fixing a typo in your feedback before release.

The grading team does not consider peer feedback when assessing your assignments. Peer reviews only factor into your participation grades.

For Gradescope submissions, the autograder may execute hidden tests when evaluating your submission (depending on the assignment); however, the score displayed upon completion represents your final grade, provided there are no unresolved plagiarism issues.

The syllabus states that the deadline is 11:59PM UTC-12 on Sundays, but Canvas reflects a later deadline. Which is correct?

We add some extra time in Canvas for two reasons: first, to account for daylight saving shifts (if we went strictly by 11:59PM UTC-12, deadlines would shift back and forth an hour relative to most of our students’ time zones); second, to allow a grace period around the submission window in case Canvas is momentarily slow, your internet goes out right at the deadline, and so on. Canvas’s deadline is always equal to or later than 11:59PM UTC-12, so as long as you aim for 11:59PM UTC-12 you’ll be fine; that said, you will not be penalized as long as you submit before Canvas’s deadline.

Note that we do not encourage trying to submit right against the deadline; the reason we use UTC-12 as our time zone is to make deadline-tracking simpler. You know that as long as it’s before midnight wherever you are, you’re still eligible to submit.
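If you want to sanity-check what the UTC-12 deadline means in another time zone, a quick sketch with Python’s standard library (the dates here are hypothetical examples, not official deadlines):

```python
from datetime import datetime, timezone, timedelta

# "Anywhere on Earth" is a fixed UTC-12 offset (no daylight saving).
AOE = timezone(timedelta(hours=-12))

# Example: a Sunday 11:59PM AoE deadline, expressed in UTC.
deadline_aoe = datetime(2024, 1, 14, 23, 59, tzinfo=AOE)
deadline_utc = deadline_aoe.astimezone(timezone.utc)
print(deadline_utc)  # 2024-01-15 11:59:00+00:00
```

Since AoE trails every other time zone, submitting before midnight local time always lands before this moment.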

I feel that my assignment was graded incorrectly. May I request a regrade?

First and foremost: you should take a day to internalize the feedback you received on your assignment, and compare your assignment to the exemplary submissions posted on Canvas. Those submissions communicate the bar for exemplary work and provide implicit feedback as you find where they appear stronger than your own. This is an excellent learning opportunity, and we recommend everyone take advantage of it. To truly benefit, we suggest learning from the strengths of multiple papers rather than fixating on a subpar section in a single paper as a basis for arguing against deductions for your own work. Such comparisons are not valid, as not every section of every outstanding submission received full credit.

Just as dedicating a few minutes to reading documentation (or forum posts) can save hours of code debugging, taking a few minutes to analyze the collective best attributes of exemplary papers can potentially save you hours of writing regrade requests—and help create a blueprint for achieving higher grades.

If even after taking a day to internalize feedback and compare to exemplary submissions you still have follow-up questions, you may post privately to the forum within one week of receiving your assignment grade. In posting, you should make clear whether you are seeking clarification on the feedback you received or are hoping to have your grade actually changed.

Can I form a study group?

Sure, and please do! Just make sure that when it comes time to actually write up code and assignments that you’re doing work individually, of course. Part of our plagiarism-checking workflow checks students’ work against each other, so make sure to collaborate at the level of ideas, not at the level of code or text.

Are there any essential or helpful software installations I should be aware of?

Consider using draw.io or Lucidchart (hint: students at Tech get a professional subscription via their GT student account) for drawing graphs and figures. Software requirements, of course, are posted on the project page for this course.

Is there required or recommended reading for this course?

There are no required readings in this course. There are recommended readings and the schedule for them is found here.

There are also optional/recommended reading materials that you can find under the Recommended Reading List on the class syllabus page. There is also the KBAI-EBook (a compilation of the lectures), which you can find under: Canvas -> Files -> KBAI-EBook

Is there a way to use JDF without using LaTeX?

Georgia Tech students get a free professional license to Overleaf, an online browser-based LaTeX editor: https://www.overleaf.com/edu/gatech

It works much like an online IDE/interpreter, and you don’t need to install anything locally: you can just import the JDF LaTeX template, compile directly in the browser window, and see the results. It also offers a GitHub integration that lets you push directly from the page, as well as PDF export.

If you don’t want to use LaTeX, there is also a Word template (.docx) and a Google Doc template here: https://drive.google.com/drive/folders/1xDYIomn9e9FxbIeFcsclSbXHTtHROD1j

You can export a Word document to PDF from both MS Word and OpenOffice, and Google Docs can export to PDF as well. If you don’t want to use LaTeX, our recommendation is to keep a copy of whichever template you prefer and simply modify it directly to create each assignment.

Are forum posts considered course content that should be cited?

If someone points out a resource on the course forum, you don’t need to cite the forum as the place where you discovered that the resource exists. If someone’s forum post is actually the source itself, though, you’d be expected to cite it. Otherwise, you should generally cite videos, articles, journals, or other intellectual works.

Can I cite Wikipedia?

Generally speaking, citing Wikipedia for an academic paper is not a good idea, and Wikipedia even agrees. If you are citing work that was original to academic literature, you should reference the original work and use your own prose. After all, Wikipedia is a conglomeration of prose from others’ interpretations of the sources referenced for a given subject matter. It is an abstraction and summary of secondary sources. Those interpretations may be inaccurate and paraphrasing them again in your own words might be a complete deviation from the original work.

It is always a good idea to cite the original work, interpret it yourself, and use your own prose to describe it. If you’re citing Wikipedia because Wikipedia is quoting an original work, then you would still want to cite the original work, which is typically cited at the bottom of the Wikipedia article (and if it isn’t, it’s even less likely that you want to cite the claim as it appears on Wikipedia).

Wikipedia actually has good information on Academic Use.

Can I use AI-based assistance?

We treat AI-based assistance, such as ChatGPT and GitHub Copilot, the same way we treat collaboration with other people: you are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based assistants. However, all work you submit must be your own. You should never include in your assignment anything that was not written directly by you without proper citation (including quotation marks and in-line citation for direct quotes). Including anything you did not write in your assignment without proper citation will be treated as an academic misconduct case.

If you are unsure where the line is between collaborating with AI and copying from AI, we recommend the following heuristics:

  • Never hit “Copy” within your conversation with an AI assistant. You can copy your own work into your conversation, but do not copy anything from the conversation back into your assignment. Instead, use your interaction with the AI assistant as a learning experience, then let your assignment reflect your improved understanding.
  • Do not have your assignment and the AI agent itself open on your device at the same time. Similar to above, use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge. This heuristic includes avoiding using AI assistants that are directly integrated into your composition environment: just as you should not let a classmate write content or code directly into your submission, so also you should avoid using tools that directly add content to your submission.

Deviating from these heuristics does not automatically qualify as academic misconduct; however, following these heuristics essentially guarantees your collaboration will not cross the line into misconduct.

Should I cite sources on [assignment]?

There are two answers to this. One: the expectation in this class is that you’ll cite sources on some assignments, not others. For example, the questions on GDPR in HW1 likely require citations. On the Projects, it is highly likely that you’ll cite sources, but I wouldn’t go so far as to say everyone must in order to complete the assignment.

Two: to gauge whether you should cite a particular source, note that there are a number of situations in which you should always cite, both in-line and in your references section. I’ll use this paper as an example.

First, most obviously, if you’re writing about a paper, you would cite it:

One paper in this field looked at the interactions between motivation and student demographics among TA applicants (Joyner 2017).

If you are directly quoting or near-paraphrasing another source, you should always cite in-line. If you are directly quoting, you would put quotation marks around the quoted material as well. For example:

Joyner writes that “scaling expert feedback while preserving affordability is possible” (Joyner 2017).

When you are providing the source for an objective fact that is not common knowledge and that you did not discover yourself, you would cite in-line as well. For example, you would cite the following statement, as it is not common knowledge nor discovered by you:

58% of online TAs cite intrinsic motivations for wanting to be teaching assistants (Joyner 2017).

You do not need to cite common knowledge. For example, you would not do this:

The earth goes around the sun (Copernicus 1514).

Finally, if you are summarizing, or using as a foundation, the higher-level ideas, methods, or structure of another source, you would cite that. This is a little fuzzier to describe, but you’ll probably know when to use it. These are times when you want the reader to know there is precedent for your ideas, methods, or structure. For example:

One key challenge with scaling online education is keeping access to expert feedback in larger class sizes (Joyner 2017).

Regardless, for all of these examples, you would have the full citation at the bottom of the paper:

Joyner, D. A. (2017). Scaling Expert Feedback: Two Case Studies. In Proceedings of the Fourth Annual ACM Conference on Learning at Scale. Cambridge, Massachusetts. ACM Press.
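For those writing in the JDF LaTeX template, the same reference might look like this as a BibTeX entry (a sketch: the entry key and field layout are our own formatting, built from the citation above):

```latex
@inproceedings{joyner2017scaling,
  author    = {Joyner, David A.},
  title     = {Scaling Expert Feedback: Two Case Studies},
  booktitle = {Proceedings of the Fourth Annual ACM Conference on Learning at Scale},
  year      = {2017},
  address   = {Cambridge, Massachusetts},
  publisher = {ACM Press}
}
```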

For more, check out Yale University’s excellent Warning: When You Must Cite.

What libraries can I use on the projects?

The permitted libraries for this term’s project are:

  • The Python image processing library Pillow (version 10.0.0). For installation instructions on Pillow, see this page.
  • The Numpy library (1.25.2 at time of writing). For installation instructions on numpy, see this page.
  • OpenCV (4.6.0.66, opencv-contrib-python-headless). For installation instructions, see this page.

Additionally, we use Python 3.10.12 for our autograder (at time of writing).
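To match the autograder locally, one way to set up an environment with these pinned versions (a sketch using the versions listed above; verify against the project page, as they may change):

```shell
# Create and activate a virtual environment on Python 3.10
python3.10 -m venv kbai-env
source kbai-env/bin/activate

# Install the permitted libraries at the pinned versions
pip install Pillow==10.0.0 numpy==1.25.2 opencv-contrib-python-headless==4.6.0.66
```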