If you're not interested in this topic, you can stop reading.
[ref: http://www.ml-class.org/course/qna/view?id=58]
You can easily verify whether they are identical by comparing their syllabi: this course (http://www.ml-class.org/course/resources/index?page=course-info) and cs229 (http://cs229.stanford.edu/schedule.html).
The entire reinforcement learning part (about 20% of the lectures) is excluded from this course. Also excluded are:
* Gaussian discriminant analysis. (Naive Bayes is only covered in a quick survey.)
* GLM.
* VC dimension.
* EM. Mixture of Gaussians.
The only topic added is Neural Networks. So this course is about 30% shorter than cs229.
As for the difficulty of what's left, you can make the comparison the same way:
1. Listen to the 2008 lectures on YouTube and compare them to the current ones (or read the lecture notes at http://cs229.stanford.edu/materials.html).
2. Compare this course's tasks (review questions and programming exercises, once they become available) with the problem sets from the cs229 page. There are currently no links to the problem sets (because, I suppose, they don't want to show them to new Stanford students beforehand), but you can use a direct link like this: http://cs229.stanford.edu/ps/ps1.pdf.
As you can see, there are a lot of tasks where the student has to prove something, and here we don't have such tasks at all. The lectures (those released so far) are also, in my opinion, easier than the 2008 Stanford recordings.
So my estimate is that this course is about 2-2.5 times easier than cs229. I don't know why the decision to simplify the course was made; it would be interesting to hear the answer from Professor Andrew or others working on this project.
Posted by: Zarutskiy Svyatoslav (+225)