CS 6601 Artificial Intelligence
Instructor: Thad Starner
Course Page: Link
This is my second year in the OMS program, and CS6601 proved to be far more demanding than the courses I had taken before. It covers a larger number of topics, and both the projects and the content are more difficult. The students definitely aged a bit by the end of the term, and one student even claimed he had started dreaming about Piazza/Canvas grade notifications. On the other hand, finishing this course was also more satisfying given the number of hours we spent on it each week.
About the course
The course is a survey of AI algorithms and paradigms and closely follows Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. It starts with Adversarial Search, the technique behind game-playing AI, covering algorithms such as Minimax, Expectimax, Iterative Deepening, and Alpha-Beta pruning. This topic comes with a companion project in which we had to implement these techniques to build an agent that plays a board game called Isolation. Following this, a survey of search algorithms used for planning, such as Breadth-First Search, Depth-First Search, and A*, was introduced. The second project involved implementing these algorithms and extending them to bi-directional and tri-directional search for more efficient planning.
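To give a flavor of the first project's core idea, here is a minimal sketch of minimax with alpha-beta pruning. The `game` interface used below (`legal_moves`, `apply`, `is_terminal`, `utility`) is hypothetical and not the assignment's actual Isolation API; iterative deepening simply wraps a routine like this with increasing depth limits until the move clock runs out.

```python
# Minimal alpha-beta minimax sketch. The `game` object is a made-up
# illustration, not the course's Isolation framework.

def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the minimax value of `state`, pruning branches that cannot
    influence the final decision."""
    if depth == 0 or game.is_terminal(state):
        return game.utility(state)

    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            value = max(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining moves cannot improve the outcome
                break
        return value
    else:
        value = float("inf")
        for move in game.legal_moves(state):
            value = min(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # opponent will never allow this branch
                break
        return value
```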
Next came lessons on Constraint Satisfaction Problems (CSPs) and Simulated Annealing. This was followed by lectures and a project on Bayes Networks, which introduced techniques such as d-separation and variable elimination for probabilistic inference. A very brief introduction to machine learning algorithms was given, with a subsequent project implementing a few variants of Decision Trees. Another machine-learning project was to implement image segmentation using Gaussian Mixture Models. We were also introduced to Hidden Markov Models and had to implement a simple sign language recognition model as a project (a toy version of the inference step is sketched below). At the end, there was a brief discussion of Propositional and First-Order Logic, followed by an introduction to Markov Decision Processes.
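As a rough illustration of the HMM inference involved, here is a toy Viterbi decoder for a discrete HMM. The states, probabilities, and observation symbols are invented for the example and have nothing to do with the actual sign language project data.

```python
# Toy Viterbi decoding for a discrete HMM (illustrative only).

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for `observations`."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace back from the most probable final state
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        last = back[t][last]
        path.insert(0, last)
    return path

# Made-up toy parameters: two hidden states, two observation symbols
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.9},
          "Sunny": {"walk": 0.6, "shop": 0.4}}
print(viterbi(["walk", "shop", "walk"], states, start_p, trans_p, emit_p))
```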
There were six projects in total, of which the one with the lowest grade was dropped. The projects made up 60% of the total grade. Most students found the first two projects very hard, and projects generally took 20-30 hours each. Two take-home, open-book exams make up the rest of the grade. The exams were long, 50+ page booklets that involved both coding and solving problems on the topics above.
Lecture Content
The lectures tend to skim over some topics without going deep into the material. Apart from the Russell & Norvig book, I found the lectures from MIT AI (Patrick Winston), Stanford AI (Liang & Sadigh), UC Berkeley AI (Pieter Abbeel), and other resources very helpful for filling in the gaps. A few videos from the channel mathematicalmonk also helped with certain topics.
Conclusion
This class was a hard but satisfying course. As with any other course, starting the projects as early as possible is key. Starting early is even more crucial here given the very large number of hours each project requires, especially if one works full time.