This post is superseded by a more exhaustive “I’m now Stanford Official” which you should read instead. This post was just a short update halfway through the program.

One might notice that the posts in my blog kind of follow the school schedule. Last time I posted around Christmas, and this post was written at the end of March. Well, that's because I'm on spring break.

I’ve enrolled in a Stanford graduate education program on Artificial Intelligence through SCPD, the Stanford Center for Professional Development. As part of the program, you take the same courses as Stanford’s CS students do, and–if you get good grades–you earn a certificate, which is not quite a degree but still something you can frame. You might also keep it rolling until you get a Master’s; many find it hard to resist.

The coolest part of this is that you get to learn from the actual pros who are advancing the field. I took classes taught by Percy Liang and Christopher Manning. Yesterday we were just reading their papers, and now look, I get to see them teach a class! Make sure to check the course catalog if the big names are what you’re after here. Also, some report that in some courses the head instructor doesn’t appear much anymore and only gives one or two presentations over video.

It takes a lot of hard work. The assignments take a surprisingly long time to complete, 10-20 hours a week; either they are really hard, or I’m a bit slow (or both). But gosh, is it worth it. I’m impressed by the quality of American education (this is my first experience at an American university). The Teaching Assistants challenge and push you a lot, and thus help you learn.

Add course projects and an occasional midterm here and there, and you have a second full-time job.

So far I’ve completed two courses:

  • CS224n: Natural Language Processing (NLP) with Deep Learning (word vectors, RNNs / CNNs, sentence parsing);
  • CS221: a course on a wide range of AI topics, featuring search, Markov Decision Processes, reinforcement learning, and other basic tools of AI. The “overview” nature of the course is a trap: I still had to pass a midterm that requires a deep understanding of all of these.

I now have a divided impression of AI and Machine Learning in general. On one hand, Machine Learning has a long history and deep roots in logic and statistics. That’s something you can’t get from the hyped-up articles in the media, only through study. These neural networks and their architectures are actually closely tied to “non-neural” function optimization. On the other hand, there is a lot of stuff I can’t quite grasp yet: why do these networks work so well? What are their limits?
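To give a concrete sense of that tie (my own toy sketch, not anything taken from the course material): a single sigmoid “neuron” trained with gradient descent is exactly logistic regression, a decades-old statistical model, just dressed in neural-network vocabulary.

```python
# A one-neuron "network" trained with gradient descent on toy data.
# This is plain logistic regression with cross-entropy loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = np.zeros(2), 0.0          # the "network": one linear unit
lr = 0.1

for _ in range(500):
    z = X @ w + b                # linear layer
    p = 1 / (1 + np.exp(-z))     # sigmoid activation
    grad = p - y                 # gradient of cross-entropy loss w.r.t. z
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

print("training accuracy:", ((p > 0.5) == y).mean())
```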

It’s still not over: I’ve just completed my 2nd course out of 4, and I’ll soon embark on learning about Computer Vision, so I expect to get a better feel for it as I go. But even without the paperwork to boast about, the education and inspiration I’ve received are priceless.

If you do choose to enroll, sign up early. The courses fill up the week enrollment opens – see the “open for course enrollment” dates in the Academic Calendar, and the quarters in which the courses are “offered” in the course catalog.

But if getting degrees is not your thing, many courses are available online; e.g. here are the cs224n lectures.

Meanwhile, stay tuned for a couple of posts I’ve written over the spring break: the completion of the Highway series, and some experiments with hardware.