I'm now Stanford-Certified with a Non-Degree Certificate

Would you like to dig deep into Deep Learning, but don't know where to start?

Do you want to work on Artificial Intelligence, but fear you don't have enough intelligence?

Trying to remember what Stochastic Gradient Descent is, but can't because you graduated a long time ago?

And finally, can you only find motivation for Learning when you're being Supervised?

Rejoice! You can get this all and more from a certain service... It's called Grad School and it's available for hire.

A year and a half ago, I enrolled in a CS-related graduate program for professionals at Stanford, and I finally completed it in December. Here's my new shiny certificate. In total, I took 4 courses, and it took me 5 quarters to complete (I took a break in the summer.) In this post I'll describe the program in case you're interested.

What is SCPD?

SCPD stands for Stanford Center for Professional Development. Here's their website. They offer courses in a variety of topics, from Foundations of Computer Science to Cybersecurity Strategy. I enrolled in the Artificial Intelligence program, which featured courses from the Computer Science department.

In this and most other SCPD programs, you just take classes with other Stanford students: you listen to the lectures (online), submit the assignments (written solutions to the math problems or computer code), get grades, take exams and midterms, and discuss your problems with the Teaching Assistants.

Each course takes a quarter to complete. I completed 4 courses over 5 quarters because I wanted a summer break. I'm sure there's a total time limit, but I can't find any reference to it now.

Usually, each program offers multiple courses, and you need to complete a certain set of them. For the AI certificate, you need to complete 4 of the 14 courses offered: one mandatory (CS221 Artificial Intelligence: Principles and Techniques) and three electives. The set of courses offered is updated to keep up with the times.

You don't have to complete the certificate program; in fact, you can just take the classes you're interested in and take it easier with the grades and the assignments. You would still need to go through similar enrollment steps, but the expectations will probably be lighter.

If you finish the program, you get a piece of paper like the one above. And if you list it on LinkedIn, you'll have a Stanford icon next to your profile ;-)

What to expect?

Expect a lot of hard work if you're actually going for the certificate (if you are just taking some courses, then pick your own expectations). The requirements listed on the certificate page are strict, so please review them.

  • 10 hours of work a week + several weekends spent on the final project, and that's if you take one course at a time. You are required to take courses for 4 units (1 unit = 3 hours a week, so 4 units is 12 hours a week), and ship a course project every time. This is consistent with my experience, and it is a serious workload. Consider this:

    • For each course taken, I had to work through 3-4 weekends on the final project. Do you like to spend an occasional Saturday afternoon chilling at the park? Forget it--unless you bring an ML book with you.

    • I had to put my hobby (bike racing) on hold for the entire 2019.
    • My performance at work decreased. I had very little creativity left and noticed I often just went with the flow instead of being the creative troublemaker I normally was. And this creativity is part of your job description and expectations if you're at a Senior level or above. I definitely missed out on some opportunities because I was tired.
  • You need to earn at least a B grade each time if you do want to earn the Certificate. This is hard: it means you need to submit most of the assignments on time, and be proficient enough in math to do well on most midterms.

    • Expect a regular cadence. Typically, you need to submit an assignment every 1-2 weeks.
    • The school doesn't stop when you go on vacation. Rescheduling because you have a vacation is not explicitly allowed (although I never tried to negotiate it). I found myself finishing up my final project on a ski trip... twice.
    • If earning the certificate is not your goal, and you're "just taking classes", you might think that earning a grade is not among your goals. However, in most classes it's pretty hard not to earn at least a B if you've actually studied and understood most of the material. Consider the free alternatives in this case.
  • Each course costs $5,000 in the AI program, and you need to take 4 of them. The tuition for other programs is different: it can be higher for Entrepreneurship certificates and lower for Foundations of CS and the like.

    • Pro tip: are you employed by a large company that can cover part or all of your tuition for work-related education? Use that option if you can.
  • The courses are not easy. In college, many of the courses you take are easy and some are hard. In this certificate, most courses are "hard".

That sounds hard. Why not just learn about Machine Learning online?

There are so many guides, there's Towards Data Science, there are books, and free videos online... But there are things you can only get through supervised education at a known institution.

  1. Personalized feedback. Go talk to the TAs (Teaching Assistants, the equivalent of seminar instructors at Russian universities) about your homework and about the projects. Zoom videoconferencing is available to remote students (but I mostly just drove to Stanford for that). If you are struggling with something but decide to skip this, you're not getting the most out of your $5k: go talk to a TA.
  2. Experienced advisors. For project work, pick the more senior TAs and talk to them. Find excuses to talk to them (a good excuse can be "my project fits with your research interests", "one of your papers seemed relevant to my project so I wanted to discuss it with you"). Your project is not relevant? Consider doing a project that is--I'm serious here.
  3. Accountability. If you lack inner drive, enrolling won't help you start, but for many people, accountability helps to be consistent. Due to the difficulty of the courses offered, sometimes you will feel drained and demotivated, and knowing that you're accountable to your school and to the TAs who helped you might just give you that last bit of motivation.

How to enroll?

So, the enrollment is open now for the Spring Quarter. What do you need to do?

  • First, the enrollment is time-sensitive. The available courses (just like the school itself) are in high demand. The spots usually sell out in the first days, or 1-2 weeks tops, after the enrollment opens. You want to put a reminder on your phone and enroll as soon as it's available. Find the enrollment dates in the Academic Calendar for the right year (e.g. here's the Calendar 2019-20) under something like "Axess opens for course enrollment". Then, enroll via the SCPD website, not through Axess!

  • You need to enroll in the program itself in addition to enrolling in the specific courses (because the AI certificate is "in high demand", duh). This was the active link with more information at the time of writing. Get the required documents ready ahead of time. Find the list through the SCPD website; the list will include your college transcript in English. I used the translation from Russian that was prepared for one of the visas.

  • You need to nominate an "exam monitor." Usually, this should be your manager or a person who can pretend to be one. Their role would be to supervise your exams and make sure you don't cheat and abide by the time limit. So make sure to pick a busy manager whose calendar is full of meetings and who won't have the time to care. ;-) And while I did nominate the exam monitor, I took all tests on campus anyway.

  • You need to design your program to minimize the completion time. For example, CS224N is offered once per year, whereas CS231n is offered three times per year. If you're interested in both, it might make sense to take CS224N when you have the chance, and take CS231n some other time, even though CS231n is a great starter course. (Actually, in one of the CS221 assignments, you'll write a constraint solver to design your program this way for you automatically :-D. See the sketch after this list.)

    See the section below on which courses I took and which I decided not to take and why.
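For the curious, here's a toy sketch of what that scheduling exercise boils down to. This is brute force rather than a real constraint solver (the actual assignment is more sophisticated), and the offerings table is made up for illustration:

    # Pick 4 courses (CS221 is mandatory) and an order for them, one course
    # per quarter, so that the program finishes as early as possible.
    from itertools import combinations, permutations

    offerings = {  # which quarters of the year (1=Fall .. 4=Summer) each course runs
        "CS221": {1, 2, 3}, "CS231n": {1, 2, 3}, "CS224n": {2},
        "AA228": {1}, "CS229": {1, 3},
    }

    def finish_quarter(order, horizon=12):
        """Earliest absolute quarter in which this exact order finishes, or None."""
        q = 0
        for course in order:
            q = next((t for t in range(q + 1, horizon + 1)
                      if (t - 1) % 4 + 1 in offerings[course]), None)
            if q is None:
                return None
        return q

    plans = []
    for picks in combinations([c for c in offerings if c != "CS221"], 3):
        for order in permutations(("CS221",) + picks):
            q = finish_quarter(order)
            if q is not None:
                plans.append((q, order))

    print(min(plans))  # the earliest-finishing 4-course plan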

Why do you want it?

What's the goal you have in mind? Different people enroll for different reasons:

  1. Change the career path.

    The credit earned in these courses counts towards a Master's degree, should you choose to pursue one and should you get accepted. One former colleague of mine did that. Now she works as a Natural Language Processing ML engineer for Google Ads.

    But she is actually an outlier! You might be surprised, but hiring managers in Silicon Valley do not consider such a certificate a big advantage, especially if they're looking for people with production experience in machine learning. Think about it: many Stanford CS students take the same courses as the ones offered in the certificate, but not all of them would be qualified for a specialized ML role. (This applies to Silicon Valley though; perhaps other places are not crowded with recent Stanford graduates, and there this certificate will mean something.)

    To this point, two other colleagues of mine with the same ML certificate and Master's from Stanford are working on ML infrastructure without any modeling in their duties. While I'm sure it helps them in their jobs (and managers are quite willing to consider such a certificate a big plus), one doesn't really need a degree to do that.

    In other words, this degree is not a guarantee, but a step towards a different career. E.g. you could use what you learned to start an ML-related side project at your current job. See the classic video titled "You and your Research"--though it should've been titled "You and your career".

    This is a well-trodden albeit a risky path. That colleague I mentioned at the beginning had transferred to her ML modeling role from the ML infrastructure position shortly after getting her stamped degree. Heck, I transferred from ML infra myself. But it's not the degree that gets you there. It's your grit and the willingness to go the extra mile; education merely gives you the tools.

  2. I just need a change. However hard it was to admit, this was a big part of my motivation. Enroll first, and then see where this takes me later. Do it at your own risk, and only if you're a person who lives by this principle in other aspects of your life as well.

  3. Gain deeper understanding of the topics. Have you been doing "classical ML" and want to learn more about deep learning? Or say, you've recently joined a team that is involved in Natural Language Processing, but this topic is new to you and you need to bring yourself up to speed. Then these courses could be a great tool; just keep in mind that some courses do offer deeper understanding but some do not. In this case, consider also just taking 1-2 classes without completing the Certificate program.

You'll have to use every free minute to study. Here's me doing homework on the Google bus.

The Wrong Reasons

However, there are things that these courses won't help you with.

  1. Learning about the cutting-edge techniques actually employed in the industry. No course geared towards undergraduate students will tell you that. Many of the TAs don't know it either. You can get this knowledge on the job, or learn it from the engineering community. Undergraduate-level courses are the wrong tool for that.

  2. Getting an equivalent of N years of experience on your resume. No, you won't be able to pretend that you're a senior, experienced ML developer after just a total of 2-3 months of personal projects of undergrad student quality. I tried, and it didn't work. That's less than one internship's worth of workload.

  3. Just to have fun. School is not fun. You're a professional now. Leave the all-nighter drinking parties you had in your teens to the confines of your memories. At top schools, "3 units of credit" actually means "9 hours of coursework a week". You won't get your youth back by enrolling into a school.

  4. I need accountability to actually get something done. So... about that. If you don't have an intrinsic motivation to get the studies finished, you won't make it. It's like thinking you'll go to the gym more if you buy the membership: you won't, and $5,000 is a lot of money.

How to try before you buy?

If spending $5k without prior research is not your thing, you can get a lot of this education for free, or at least try a course before you buy it.

The algorithm is simple:

  1. Find the course you're interested in using the Course Catalog.

  2. Google "Stanford <course name>". The first link will be the course website, wide open to the public. Read the lecture notes. Download and try to solve the assignments.

It's insane how much of this stuff is available for free. For example, here are the full lecture notes for CS231n. There are lecture notes for other classes too. The lecture notes for CS229 are a book's worth of foundational information on "classical ML".

Besides, a simplified version of CS229 is offered on Coursera. Homework is not required.

And of course, besides Stanford, there are other resources where you can learn about ML. Consider Google's Machine Learning Crash Course. It's much more practice-oriented, and you'll avoid doing all the math. If you take it and feel like you crave more math and are willing to learn these concepts on a deeper level, then proceed to some more advanced programs. If not, proceed to hacking on cool ML-y things.

Go Local

Now, I know, it's hard to beat a certificate from Stanford. Few schools have the credentials, the reputation, the resources, and the professors that Stanford offers. If you have never been to the Stanford Campus and you live close by, you gotta check it out! It even has a Rodin Sculpture Garden.

However, if you live, say, close to another good school that offers a similar program, consider enrolling into that school instead. There are certain benefits in face-to-face communication with TAs and other remote-ish students.

Yes, the SCPD policy discourages you from attending the lectures and tests on campus. But while you "are not guaranteed a spot" in the exam auditoriums, this was never a problem for me. I just showed up and always found a spot. The TAs and professors were always happy to talk in person rather than mess with Zoom or other remote conferencing. Although, I once drove to the campus only to find that my TA was home, doing videoconferencing via Zoom.

Besides, some Poster Sessions, where you display and present your project with a poster, are essentially giant career fairs. The CS231n "Convolutional Neural Networks" poster session especially was one: Waymo, Zoox, Cruise, and a bunch of other companies deployed their recruiters there.

The same can be said about other industry events too: CVPR has reportedly become not just a career fair but a full-fledged industrial ML expo in addition to being a kinda scientific conference.

Stuff you get as a Stanford non-student

No, you can't get a Student ID, so you won't be eligible for discounts. You will, however, receive access to the digital library, and you'll be able to download all the papers needed for your work from ACM, IEEE, and other websites for free.

And you will get the @stanford.edu email for the duration of your studies. Make use of it while you can.

Just get a GPU. It'll be easier than messing with remote access to the free cloud compute.

Since Machine Learning requires a lot of GPU time, you'll get credits on Microsoft Azure or Google Cloud to run GPU-intensive computations for class assignments and projects. But if you have a gaming computer at home, you don't need to do that. Last year, while I did buy a new GPU, I was able to make do with my old GeForce GTX 1070 for the most part, which you can buy new for \$300 (it aged well, as far as machine learning goes). Remember, you've shelled out \$20,000 for tuition, so you can definitely spare \$300 for the GPU.

Even if you don't have a gaming PC and only have a non-gaming laptop, consider making an External GPU setup like the one I described a year ago in the blog.

Tips and tricks

  • There are social mixers for local SCPD students. Join them.
  • After your enrollment finishes... nobody will tell you what to do next. I had to ask my friends where to find the schedule. Look into your Stanford email account for instructions, and if that fails, just Google your course name. The webpage will show the course schedule, and you can watch the lectures remotely via the Panopto video delivery website.
  • If you're at a big company, or are a member of a local meetup, find friends to do coursework together. I partnered with two other Googlers for the CS224n course project.
  • On the second or third week, start thinking about your final project. It's not due for another 2-3 months, but it will take a long time. Discuss your project with TAs; it's a better use of their and your time than trying to solve that one homework problem.

Which Courses I Took (and which projects I've completed)

Out of the AI certificate courses, here's what I've taken (in the order they might be interesting to you), with a more in-depth description to follow.

CS231n Convolutional Neural Networks for Visual Recognition

CS231n Convolutional Neural Networks for Visual Recognition is the hands-down best course on Deep Learning and neural networks I have taken. If you're interested in practical deep learning, take this course first. At the time of publishing this blog post, the enrollment for this course is open, so don't miss your chance. You'll not only learn how to work on computer vision, but you'll learn the foundations of deep learning, starting from manually deriving and implementing gradients and backpropagation, and culminating in applying tried-and-true architectures to foundational computer vision tasks (including "nitty-gritty" details like building activation maps).

While 5 years ago this course would have been state of the art, it seemed a little bit dated to me. I wouldn't take it for the cutting-edge updates on the topic, but rather for the foundations.

  • My Project: "A Battery-Powered Friend: Towards Embedded Emotion Recognition" (poster, report). This is my most successful project here; I extensively described it in my blog, and I won't repeat myself, but I literally built a Raspberry Pi device that ran a CNN on device and "smiled at you" with its green LEDs if it detected a smile with its camera. The project included hiring MTurk workers for image labeling, hacking on a Raspberry Pi, and deep learning of course, which made it a really fun experience.

  • Note: Fei-Fei Li only gave 2 lectures when I was taking this course; most of the instruction was done by the head TAs. That is not to say the quality of this course is subpar: the instructors are experienced and motivated; Justin is a great teacher. I just don't want you to come in with the wrong expectations.

  • Poster session for CS231n was essentially a career fair this year. Come in person if you can.

CS221 Artificial Intelligence: Principles and Techniques

CS221 Artificial Intelligence: Principles and Techniques is the required course that covers a wide range of AI topics: approximate graph search algorithms (A*), Reinforcement Learning (with the fun Pac-Man challenge, in which I got 4th place), Markov decision processes, constraint satisfaction solvers, and other basic tools of "classical" AI without a lot of ML per se. If you think this course is easy because it's an "overview" course, don't be fooled. You are expected to dig deep and demonstrate understanding of all of these topics on the 2 midterms and across 8 assignments.

The course was taught by Percy Liang; he's a great instructor and researcher; in fact, our project for CS224n (see below) was an extension of his paper. :-)

Algorithm performance evaluation using... ahem, volunteers 😇

  • My project: "Using Weather Data for Wine Recommendations" (poster, final paper). It was my first project, so the quality of the report is lacking, and the weather data didn't bring the results I was hoping for. However, unsupervised feature learning via Latent Dirichlet Allocation and clustering totally worked! And it did produce better wine recommendations than the store experts, as evidenced by a blind tasting we conducted with our friends.

Leaderboard for Pac-Man competition featuring my submission 5 days before the deadline... :-( If you get a large score, do not reveal it!

  • The course features the famous Pac-Man challenge, as part of which the students teach their AI to play Pac-Man. Pro tip: if you want to win the challenge, do not submit early. Otherwise the other students will get motivated by your scores and will try harder. :-(

CS224n Natural Language Processing with Deep Learning

CS224n Natural Language Processing with Deep Learning. I learned about Recurrent Neural Networks, LSTMs, word vectors, and neural language models. In two of the assignments, you'll implement portions of a Neural Machine Translation system and train an encoder-decoder network to translate English into Spanish. You'll do it once with word embeddings, and once with CNNs. This is insane!! These were the world's most mind-blowing achievements 10 years ago, and now you can do it with your own bare hands (and a GPU or two).

This is a pretty new course, and it did represent the state of the art in NLP when I was taking it a year ago. If you're interested in NLP, this course is a must. Out of all the courses, this was the best taught, most modern, and most memorable to me.

  • Our Project: "Digital Mad Men: Training RNN as an Ad Copywriter" (poster, report). We did a group project with Sri and Lin, two other Googlers who were participating in the program. We applied some new research from Stanford NLP lab on variational language models to try and get the neural network to write positive product reviews by inserting product names into the existing reviews. We couldn't get the network to insert the keywords, sadly, but we learned a lot in the process.

  • Pro tip: find your peers who are doing the same course on an internal Slack / mailing list, and band with them for the final project. It's a bit difficult to make people with busy schedules collaborate, but it's worth it.

  • This course was taught by Chris Manning (60%) and Abigail See (40%); both are actively working in the field and are great instructors. I'm sad Abigail doesn't seem to be teaching this course this year.

AA228 Decision-Making under Uncertainty

AA228 Decision-Making under Uncertainty. This is a combined CS/Aeronautics course that covers modeling and working with uncertainties in decision-making and planning. It's quite a niche course, unlike the ones listed above, and almost exclusively focuses on planning, Q-learning / SARSA, Monte Carlo tree search methods, and partially observable Markov decision processes.

The most useful outcome for me was that I finally understood how to use Bayesian inference in practical contexts. This course starts with foundations, and the instructor takes the approach of "unless you truly understand the foundations, we might as well not even talk about POMDPs". I finally could actually work with conditional probabilities, and I am now able to contribute to modeling the mission objectives at work.

  • My Project: "Self-driving under uncertainty" (report; poster not required). I applied Monte Carlo tree search to decision-making given imperfect perception. The project didn't work out in a variety of ways: essentially, I learned why MCTS and other such methods are not yet widely used in the industry. The POMCPOW planner I implemented was myopic and required handcrafted features to even drive forward and not stall in place.

    Interestingly, there was a similar project in CS221 that we did as part of one of the assignments. The methods used there worked much better for a similar formulation.

  • Pro Tip: this course is easier and less intense than the other courses.

Which Courses I Passed On

  • CS230 Deep Learning was just... not deep enough, according to my friends who studied it. Other courses will give you the same introduction to Deep Learning plus deep practical knowledge of a specific area.
  • CS229 Machine Learning was mostly about "classical" Machine Learning, with a lot of math to solve in the assignments and not a lot of practical knowledge. Instead, I just read the lecture notes I downloaded from the internets at my own pace in the summer.

My personal Learning Outcomes

So what did I get out of it?

  • I can have a conversation about deep learning with other people. Heck, I can make jokes about ML! (Next milestone: make jokes people laugh at).
  • I can read, understand, and pass small amounts of judgement about the relevant ML papers.
  • I revived the dormant memories of linear algebra, statistics, and calculus.
  • I learned that 80% of Machine Learning is indeed downloading datasets, generating features, writing evaluation metrics, and trying to find labeled data relevant to your project.
  • ...and finally, I did achieve my goal of changing the career path. I work on self-driving cars at Zoox now.

Good luck.

The Blog Turns... 10 Years!

Today is 10 years since I wrote the first post for my blog. Ten. Freaking. Years. It's almost hard to believe that I had something remotely meaningful happen in my life 10 years ago.

So... uhm, what else do I say? When I started this blog, I was still living in Russia; writing about programming in English was a strange choice many questioned. I'm glad that I started it that way. Gosh was my English bad back then, but at least I got some practice in.

This proved helpful, though not really for language acquisition. This blog helped me become a better writer. There is a lot of written communication required of a high-level software engineer in Silicon Valley. When friends ask me what [programming] language I use at work, I usually reply "English". It's true! There were periods when I churned out a new "design doc" every week at Google, and these periods turned out to be crucial for my career. If for nothing else, this was the main thing this blog helped me with.

Greatest Hits

Maybe I should go over the greatest hits? According to Google Analytics, my most popular pages over the last year were:

  1. How to Get Binary Search Right, which is basically job interview advice. But I cheated here a bit. See, when I joined Google Ads, I got the ability to use one dollar a day on Google Ads traffic.

    Here's the demonstration of what $1/day spent on Google Ads can do for your content.

    Without the ad spend, the post is getting almost no clicks. I still think it's a good post but alas.

  2. Easy parallelization with Bash in Linux with my favorite seq 10 | xargs --num-procs 5 Bash trick.

  3. Limiting time and memory consumption of a program in Linux, my most successful open-source project that emulated cgroups before they were stable.

  4. Take Apart that Giant Case: A Compact, 1.5kg Thunderbolt3 eGPU Setup detailing my experience with tearing down and rebuilding an eGPU case.

  5. Busting C++ myths: virtual function calls are slow showing how virtual functions work in C++ and why one shouldn't fear them (I'm afraid some of my claims there were unfounded, but so are the fears of virtual function calls).

  6. SF Muni LED Sign at Home with Raspberry Pi, my most famous project, which was featured on Reddit and on a bunch of makers' websites.

  7. Consistency Models Explained Briefly, where I filled the gaps in Google search and detailed what Quiescent Consistency means. This page is linked from some of the lecture slides in advanced CS courses; a lot of students visit it, and I'm truly honored.

And a lot of other things. I really like my attempts at short witty stories like Zeno's Flat Screen TV Paradox, or "Lesson III", or "A dying Tux in an Airbus", which sometimes get featured on Reddit. But nobody reads those. Most people who google are looking for answers, and they're finding them on the "how-to" pages like the "AWS Beanstalk Speed Run".

What's next

If I were to truly chase popularity, writing a blog is no longer the way to do it. I should make a YouTube show or record a podcast. That's just how content is consumed now. The closest I got to YouTube was this video I recorded for one of my blog posts. Truth be told, I actually wrote that blog post not for the readers of the blog (I have very few anyway). Primarily, I made it as an illustration for an internal Google user study run by the Maps team when they were considering taking away the ability to drag the point around with the mouse.

That could be promising though. There are successful shows that would've been blogs 10 years ago, take "Two Minute Papers" for example. But I'm not here for clicks.

This blog is primarily a way for me to organize my thoughts and to just... let it out. This blog is also an interesting recap of my career progress. Let's plot the number of posts over the years.

At the beginning, I was just letting it all out. Everything was new and exciting. A new merge sort algorithm--blogged! A new C++11 feature--blogged. Someone aligned the comment to the right, like in the math books--blogged. I failed a job interview--blogged. In that last post I also said I'd never work for Google, and there I was, working there for almost 7 years, ha.

Back then, everything was new and exciting. And then it tapered off. I made 0 blog posts in the whole year of 2015. Arguably, 2015 was the most intense year of my life; while I could say I had other things going on, I did get these things done! Why didn't I blog?

The answer was simple, although I had to search hard for it. Software was no longer new and exciting. I looked at this (and at a bunch of other things) and realized I was stagnating. I decided to change my career. It took me two years, but instead of your run-of-the-mill mid-level tech lead, I'm a machine learning engineer now. I'm working on self-driving cars, building models of traffic, making ML projects with Raspberry Pi, and posting solutions to little math problems.

Software is exciting again. I feel young. And I'm anxious to tell the world all about it.

P.S. To all the readers who have been reading my musings, and who did come here through an RSS feed: thank you. I'm grateful that you're reading these posts, and I only wish you commented more. Or just shoot me an email, like some of you did. E.g. Martin, who helped me fix errors in the post about Binary Search--Martin, I remember your name 6 years later; huge kudos to you and to others.

Lessons from a Smiling Bot, III: Customers don't Care it's Hard

Other lessons from a Smiling Bot

The lessons I've learned when building a Smiling Bot: an embedded CNN model for the deep learning for Visual Recognition course at Stanford.

There was this woman I really liked, and I needed to show off in some way. How do you impress a girl? Of course you show her your machine learning projects (gotta rock what you have 🤷). I no longer have the Muni Sign on my wall though--that's been a reliable magnet for ladies--so I showed her my Smiling Robot.

She was impressed...not. Here are her exact words after she saw my Raspberry Pi correctly read a smile off of my face:

That's nice... It's slow though.

What?! Two months of hard work on top of ~10 years of CS education, several sleepless nights, modern groundbreaking research packed into a Raspberry Pi, a poster with Arnie, and all I got was...

It's slow.

Yeah, sorry girl (not sorry)! I built a highly optimized CNN using neural architecture search, but the non-neural Viola-Jones boosting-based face detector that I downloaded from the internet without looking and haphazardly slapped on in Lesson II wasn't that optimized. It only takes three more seconds to give the result (totaling five seconds), and that's on an ARMv5 chip. ARM vee five! Isn't that awesome??

It's slow.

In the heat of the moment, her response hurt my ego so much that I even started to explain what was going on. But I quickly shut up. Maybe I shouldn't have been that defensive. Not only was I missing out on an opportunity to better myself, it's just not attractive for a guy to be defensive. Acting manly seemed more important at the time, so I swiftly moved on to other subjects, but her words stuck with me:

It's slow.

Gotta admit, she's right. Despite all the hard work I put into the Bot, it is a tad slow. I have excuses but it doesn't matter: that's a valuable lesson to learn.

Customers do not care that your machine learning project was hard!

Of course they don't care. People are surrounded by technology that, in the words of Arthur C. Clarke, is "indistinguishable from magic".

Yes, most have no idea how to backpropagate gradients, but then they talk to (talk to!) their Google Home, and it knows what they're looking for before they even say it.

They can't explain why residual connections are a good idea for deep convolutional encoder-decoder architectures (to be honest, nobody can explain that; it just works, okay?) But they open Snapchat, and the filter swaps their face with their bff within a millisecond.

They can't appreciate that simple linear models can do wonders when thousands of features are thrown at them. But then they open Facebook, and it shows them stuff they don't like, but they still open it again and again... Well, that one doesn't sound as magical--but the picture they snapped with their phone looks better than an amateur photographer's attempts at art shot on a DSLR.

Then they open your product, and...

It's slow.

They're right. Don't ship crap.

Lessons from a Smiling Bot, II: Get Features from Production

Other lessons from a Smiling Bot

The lessons I've learned when building a Smiling Bot: an embedded CNN model for the deep learning for Visual Recognition course at Stanford.

The Smiling Bot I made for CS231n was performing okay, at least on paper. 40% recall at 70% precision for smile detection is not disastrous for an "embedded" model. When I showed it around to the colleagues in the office, however, the performance seemed way worse. 2 smiles recognized out of about 15 attempts: that is not what 40% recall looks like.

This time, however, I "logged features from production"--every capture taken was saved to internal memory for later inspection.

Turns out, the pictures of the actual users didn't quite look like the training data. Here's a dramatic reenactment (the verbal Terms of Service my users consented to didn't allow me to publish their images):

Whereas the pictures in my training dataset looked like this:

The training dataset images were universally sharper, had better dynamic range, and less noise. Yes, the augmentations I applied did add jitter and motion blur; they rotated and scaled, but there was no "destroy the shadows contrast" or "emulate the backlit subject" augmentation.

I should add one, retrain, and see what happens.
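Here's a minimal sketch of what such an augmentation might look like, assuming images come in as float32 numpy arrays in [0, 1]; the parameter ranges are guesses, not tuned values:

    import numpy as np

    rng = np.random.default_rng()

    def degrade(img):
        # Crush the shadow contrast: gamma > 1 pushes dark values toward black.
        img = img ** rng.uniform(1.0, 2.5)
        # Emulate a backlit subject: darken overall, then lift the black level
        # the way a washed-out cheap sensor would.
        img = img * rng.uniform(0.5, 0.9) + rng.uniform(0.0, 0.2)
        # Add sensor noise typical of a dim indoor capture.
        img = img + rng.normal(0.0, rng.uniform(0.01, 0.05), img.shape)
        return np.clip(img, 0.0, 1.0).astype(np.float32)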

The Chicken and The Egg

"Logging features from production" is considered the cheapest and the most maintainable way to ensure the model trains of what it would actually face. Quoting Rule 29 from Google's "Best Practices for ML Engineering" post:

Rule #29: The best way to make sure that you train like you serve is to save the set of features used at serving time, and then pipe those features to a log to use them at training time.

Even if you can’t do this for every example, do it for a small fraction, such that you can verify the consistency between serving and training (see Rule #37). Teams that have made this measurement at Google were sometimes surprised by the results. YouTube home page switched to logging features at serving time with significant quality improvements and a reduction in code complexity, and many teams are switching their infrastructure as we speak.

They're correct. But there's a chicken-and-egg problem here.

See, if I don't have a device with a trained model, I have no users! If I have no users, nobody is producing features for me to train on. In order for my colleagues to play with my device, I had to have the device built. I probably could just flip a coin every time a user pressed the button, but this approach wouldn't scale to many users.

But if I want my model to perform, I need to train on the production features. What came first, the chicken or the egg?

I think augmentations and feature engineering could be the answer here. But the more important lesson is this: in the early stages of model development, you simply can't train on production features--there is no production yet.

***

In one capture, the non-neural face detection pre-filter even considered the light fixture to be a more likely face than my own.

That's why we need neural networks, kids.

Lessons from a Smiling Bot, I: Training/Serving Skew, Second Opinion, CSV-based ML Infra

Most Machine Learning courses at Stanford require coming up with a relevant project and delivering it before the course deadline. For the courses I passed earlier, I built a similarity-based wine search algorithm and an automatic advertisement generator. For the most recent course, deep learning for Visual Recognition, I made a Smiling Bot: a Raspberry Pi device that detects when you smile and... smiles back.

Other lessons from a Smiling Bot

The lessons I've learned when building a Smiling Bot: an embedded CNN model for the deep learning for Visual Recognition course at Stanford.

The time constraints of the course only allowed me to build a simpler version. Sadly, it didn't perform well enough. On paper, everything was great: high precision (70%), acceptable recall (40%), and solid speed. This, in addition to a poster done well, was enough to get an A. But when deployed onto the actual device, the performance didn't seem to match the numbers.

There were more learning opportunities in this project, so I decided to dig deeper and make it work. I found and fixed some issues, but there's more to do. Here's what I learned.

Smiling Bot

I've always had a latent desire to make machines be more like humans. It's not just an abstract desire: a colleague of mine, Allison, even worked at a company that made it their business! When the time came to choose a project for CS231n, I asked my friends for ideas. A friend of mine, Melissa, suggested exploring emotion recognition... Thanks to all this inspiration, I just had to do something related.

So I challenged myself to make a small device that smiles back when you smile at it. (I think Melissa meant something about helping people who have trouble recognizing emotions, but we gotta start somewhere 🤷).

When you press on its nose, it "looks at you" with its eye-camera, runs a Convolutional Neural Network on device (no phoning home to the cloud!) and lights up the green "smile" if you were smiling. :-) That simple. To the left is the battery. The bot can feed off of it for about 4-5 hours.

Originally, inspired by the talk by Pete Warden and this device, I wanted to deploy it onto Arduino. But my camera interface didn't match, and I decided to just use the Raspberry Pi I had in my drawer.

(The nose pressing will be replaced with a proximity sensor real soon. This button is mostly for testing.)

Image classification (into "smiling / not smiling") is mostly a solved problem in deep learning. The challenge here was: can I make this inference run on a low-power, low-performance device without losing quality?

On paper, its performance was okay. Stellar ROC curves, 40% recall at 70% precision; what else can you wish for from a small, 2 MB (!!!) model?

But in practice, the performance didn't seem to match the promised figures. Then, I started digging deeper.

Lesson 1A: Human Review requires a second opinion

Machine learning requires data labeled by humans to train the neural networks. Facial expression datasets are really hard to come by, and human labor is expensive. (That's why we're trying to train machines to be more like humans in the first place.) So of course, doing an unfunded research project, I wanted to save on labeled data.

I decided to only ask one human to evaluate each picture. Big mistake.

  1. I didn't collect enough data. Only 6000 training examples plus 1000 for validation didn't seem like enough to train a 5-7-layer AlexNet.

  2. The data I collected was of low quality. Based on a random sample of the data, I found that the labels were wrong 14% of the time. I asked a simple question, "Is the person in the picture smiling?" with Yes/No/IDK options. Yet, 1 out of 7 pictures was labeled incorrectly.

The best way to mitigate it would be to ask 3 or 5 raters to rate every picture, and take the majority vote.

If I were cost-conscious, I'd ask only two raters the question, and simply discard the answers where the two disagreed. Think about it: this strategy costs only ~14% more than just asking 2 raters once (you have to send out extra pictures to replace the discarded ones), compared to a 50% increase if we ask 3 people. See the back-of-the-envelope calculation below.
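Here's that back-of-the-envelope calculation in Python, using the same assumption as above (roughly 14% of rater pairs disagree and their pictures have to be re-labeled):

    # Ratings needed per *kept* picture under each labeling strategy.
    discard_rate = 0.14                 # fraction of 2-rater pairs that disagree
    cost_two = 2 / (1 - discard_rate)   # ~2.33: re-label the discarded pictures
    cost_three = 3.0                    # majority vote: nothing is discarded
    print(f"two raters + discard: {cost_two:.2f}, three raters: {cost_three:.2f}")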

Turk Pricing woes

Perhaps one of the reasons for the disappointing quality was that I batched the tasks together so that on one web page, a turker would answer questions about multiple images.

Amazon's pricing for Turk tasks disappointed me a bit. So first, how much to pay the humans for an evaluation? The right way to measure is to target a certain pay per hour, estimate how many tasks per hour a human should do, and divide the two.

Say, we want the rater to earn \$20 / hour. I myself wrote a small piece of rating software and determined I can rate ~2000 pictures per hour. (At which point I should've just rated them myself, but I considered it an investment in my education). So I would pay \$20 / 2000 = \$0.01 per task.

But Amazon wants a cut of \$0.01 per one "task" minimum. I didn't want Amazon to get a 50% cut of my pay.

So I made the survey page contain 5 pictures per task and did "the right thing" by advertising it in the description, so that Amazon's minimum fee would be spread over five pictures. Of course, inspired by the Amazon example, I took the opportunity to shortchange the workers and pay $0.02 for 5 pictures.

However, the workers sent me feedback (yep, I got feedback from the workers--apparently, that's a feature) that this batching broke their hotkeys. I haven't yet figured out how to fix that.

It still cost me \$300, but in the grand scheme of things I've got my money's worth.

Lesson 1B: Training/Serving Skew

Training/Serving skew is a decrease in the performance of an ML model due to the unforeseen (and sometimes hidden) difference in data used for training and the data actually seen in production (when "serving" the users).

There aren't many materials on this concept; I might just be using the Google-invented name for it (e.g. see this blog post).

Interestingly, I already knew about this concept from work. When building a machine learning framework at work, experienced AI engineers warned me about training / serving skew. Now I also learned to look for it.

Essentially, I trained my model on Google Facial Expression Comparison Dataset. Each photograph has the bounding box of the face provided. Of course, I cropped the images to the bounding boxes.

And of course, when deployed on device, the pictures were taken as is, without any face cropping.

Ok, I fixed it by adding the standard Viola-Jones face detector; there are pretrained detectors available. But then, it started taking ~1-2 seconds to run the face detector. My smallest smile detection model takes that much time to run!
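For reference, here's a minimal sketch of that fix, assuming OpenCV's bundled pretrained Haar cascade (a Viola-Jones detector); the crop-the-largest-face policy is my assumption:

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_largest_face(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None                    # caller falls back to the full frame
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # the largest box
        return frame_bgr[y:y + h, x:x + w]  # crop like the training data was cropped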

The performance improved. At least it was able to detect my own smile well, under the laboratory lighting conditions. In the wild, it worked on only 1 out of 3 subjects though.

Lesson 1C: Easy-to-use Machine Learning infra is lacking

In order to collect the data, I pretty much wrote my own database. Because a dataset rarely contains the features you need. You need to transform the dataset into features. And then, you need to store the features. And then repeat this process, and wish you had enough foresight and time to make it repeatable.

So here's an inventory of my tiny effort at ML infra:

  1. About 100 lines of Python to load/save/join/merge data into CSV (aka "the DB"; see the sketch after this list);
  2. About 250 lines of Python to download data and extract features from the Google Faces Dataset;
  3. About 90 lines of Python to manage MTurk ratings;
  4. And 80 more lines of Python to store model evaluation / performance / profiling results.
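For flavor, here's roughly what that CSV "DB" layer looked like. This is a from-memory sketch using only the standard library; the file names and columns are made up:

    import csv

    def load(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def save(path, rows):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

    def join(left, right, key):
        # Inner join of two lists of row dicts on a shared key column.
        index = {row[key]: row for row in right}
        return [{**l, **index[l[key]]} for l in left if l[key] in index]

    faces = load("faces.csv")              # image_id, file path, bounding box, ...
    ratings = load("mturk_ratings.csv")    # image_id, is_smiling, ...
    save("training_set.csv", join(faces, ratings, "image_id"))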

Good thing I make ML frameworks for a living, and I know what code to write and why.

And you know... it still seemed faster and more efficient than investing in learning an ML framework (such as MLFlow and friends).

More Lessons to Learn

Even though I've mitigated the training/serving skew to some extent, there are more lessons to learn. I know that because whenever I tested my robot in the wild, the results were a bit subpar. Some directions that I'm going to explore now are:

  1. Should I just get more labeled data?
  2. And if I do, the dataset will no longer fit in RAM; should I learn a more advanced ML framework to help me manage that, or should I train from scratch?
  3. Did I reduce the first convolutional layer a bit too much?
  4. Is there something in the way people interact with the device that promotes skew? E.g. is the way people hold the machine, or the lighting conditions, representative of the training dataset?

Stay tuned!

What is the gradient of broadcasting?

"What is the gradient of numpy broadcasting?" I was wondering the other night. It's almost midnight, my brain is tired, and I got a bit stuck.

In short...

In numpy, broadcasting is adding a one-dimensional vector b2 to a 2-D matrix X, which implicitly adds b2 to every row of X.

To compute the gradient of the loss with respect to b2 (let's call it g), we rewrite the expression as matrix multiplication. Here, B2 is a matrix of shape (M, 1), which is just b2 with one extra axis, and $1^{B\times 1}$ is the broadcasting matrix:

$$X+b_2 = X + 1^{B\times 1}B_2^\top$$

In order to backpropagate the upstream gradient $G$, it gets multiplied by the transposition of the broadcasting matrix:

$$g^\top = \left(\frac{\partial L}{\partial b_2}\right)^\top = 1^{1\times B} G$$

Effectively, $1^{1\times B}$ acts as a summator along the first axis ($g_k = \sum_{i=1}^B G_{i, k}$), which can be expressed in numpy as:
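    g = G.sum(axis=0)  # sum the upstream gradient over the batch axis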

Let's explore how to get there in three different ways: the intuitive, the "compact form" and the "full form" tensor derivatives. But first, how did this even come about?

Midnight homework

Back then, I had just started my CS231n course studies as part of the Stanford SCPD certificate program. CS231n "Convolutional Neural Networks for Visual Recognition" is a great "starter" course if you study Deep Learning. We had to learn the basics and manually implement and test quite a few backpropagation algorithms for simple network layers. I'll write a more in-depth overview later.

Importantly, we got to understand vector operations at a really low level, and we had to use a general-purpose math library, numpy, as opposed to the Deep Learning-specific Tensorflow or Pytorch.

Broadcasting in a Fully-Connected Layer

When I just started CS231n, I was used to that framework-level thinking about deep learning algorithms, and it took a bit of effort to shake it off. When implementing a two-layer fully-connected neural network using numpy, you add the bias vector to the input batch multiplied by the weight matrix:
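    h2 = X1.dot(W2) + b2  # "broadcasting": b2 is added to every row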

Here:

  • X1 is a matrix of shape (B, N) where B is the batch size, and N the number of outputs from the first hidden layer.
  • W2 of shape (N, M) where M is the number of outputs from the second hidden layer;
  • b2 is the bias vector of shape (M).
  • h2 of shape (B, M) is the output of the layer for each example in the batch.

But wait. The result of the matrix multiplication X1.dot(W2) is of shape (B, M), so how can you add a vector of shape (M) to it? In numpy, as well as in our lecture notes, that's called "broadcasting", and it's a convenient method of working with batches of data. The code above essentially means "add b2 to every row of X1.dot(W2)". Our lecture notes told us to "use broadcasting", and I did. Neat.

Later, I was implementing backprop, and for that, I needed to know the derivative $\frac{\partial h_2}{\partial b_2}$. That's when I got confused. What is the gradient of that "broadcasting"?

Intuitive derivation

We can ask a simpler question instead: how does $b_2$ contribute to the eventual loss $L$? Since it independently affects each example in the batch (which means its impact across the batch needs to be summed), we could simply compute the gradient for each example, and then sum the results along the batch axis. That seemed too inventive to me at the time. Is there a more "mechanical" way to approach the task?

Broadcasting as matrix multiplication

To answer that, let's think about how to represent broadcasting without knowing that we operate in batches. Apparently, something happens to that b2 vector before it's added to the product $X_1W_2$: some function f is applied:

$$h_2 = X_1 W_2 + f(b_2)$$

In order for the addition to make sense, f should produce a matrix of shape (B, M) out of a vector of shape (M), such that each row of $f(b_2)$ equals $b_2$. Could it be matrix multiplication?

Indeed, if $f(b_2) = Zb_2^\top$, then Z is a matrix of shape (B, 1), and we can consider $b_2$ a matrix of shape (M, 1) (let's actually start calling it $B_2$, since a one-column matrix and a vector are two different things). It makes sense if Z is simply a column of ones (as in np.ones((B, 1))):

$$f(b_2) = 1^{B \times 1} B_2^\top$$

So, the expression for $h_2$ becomes something more manageable:

$$h_2 = X_1 W_2 + 1^{B\times 1}B_2^\top$$

My first instinct was to try to define $\frac{\partial h_2}{\partial B_2}$. As I later learned, that's not the most effective way: it only makes sense when operating on "full form" derivatives I'll describe below. Instead of directly computing the "gradient of broadcasting", we use it in backpropagation from the upstream gradient $G$:

$$\frac{\partial L}{\partial b_2} = \left(\frac{\partial L}{\partial B_2^\top}\right)^\top = \left(\underbrace{\frac{\partial L}{\partial h_2}}_{= G^\top} \frac{\partial h_2}{\partial B_2^\top}\right)^\top = \left(\frac{\partial f(B_2^\top)}{\partial B_2^\top}\right)^\top G = \left(1^{B \times 1}\right)^\top G = 1^{1 \times B}G$$

This is essentially the gradient of the broadcasting applied via the chain rule.

What does the expression $1^{1 \times B}G$ mean? If you look at its per-component layout, this expression effectively calculates the sum along the batch axis of the right-hand side matrix. Yep, the transposed "broadcasting" matrix becomes the summation matrix, and the end result will be:

$$g_k = \sum_{i=1}^{B}G_{i,k}$$
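A quick numerical check of this claim, with made-up shapes B = 4 and M = 3:

    import numpy as np

    B, M = 4, 3
    G = np.random.randn(B, M)   # upstream gradient dL/dh2 (compact form)
    ones = np.ones((1, B))      # the transposed broadcasting matrix
    assert np.allclose(ones @ G, G.sum(axis=0, keepdims=True))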

Compact and Full forms of tensor derivatives

Wait, why is $\frac{\partial L}{\partial h_2}=G^\top$ rather than $G$? And why did we end up with a matrix of shape (1, M) instead of the desired (M, 1)? Where do all these transpositions come from? Well, that's because we've actually been using the "compact form" instead of the "full form" of tensor derivatives.

Yep, the way we usually operate with tensor derivatives is by using the "compact form", which omits quite a few zeros. You can read a great overview of this in the CS231n lecture notes.

The actual "full" Jacobian of two matrices $\frac{\partial \mathbf{A}}{\partial \mathbf{B}}$ would be a four-dimensional array with the shape equivalent to $(a_1, a_2, b_2, b_1)$ (where $A \sim (a_1, a_2)$ and $B\sim(b_1, b_2)$--we're using $\sim$ for "is of shape" here). However, in deep neural network calculus, one of these dimensions is usually a batch dimension. And batches are independent from one another. So most of the elements in the "full Jacobian" would be 0 (read more here). That's why everyone uses a compact notation that only lists the elements that matter. This requires you to be inventive and "get it". But for me to "get it", I had to write it out in the full form.

$$\frac{\partial h_2}{\partial b_2} = 1^{B} \otimes I^{M\times M} \sim (B, M, M)$$

you can see how most elements of this tensor (let's call it $T$) are indeed zeros: $T_{i, j, k}=1$ only when $j=k$. In fact, it has only $BM$ nonzero elements, and they are identical across the batch axis. This is consistent with the fact that the "compact form" of $\frac{\partial \cdot}{\partial \mathbf{X}}$ is usually assumed to take the shape of $\mathbf{X}$ itself. Compare the "full form":
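Here's a quick numpy rendering of that full-form Jacobian, just to see the sparsity (shapes made up, as before):

    import numpy as np

    B, M = 4, 3
    T = np.einsum("i,jk->ijk", np.ones(B), np.eye(M))  # 1^B ⊗ I^(M×M), shape (B, M, M)
    # T[i, j, k] = dh2[i, j] / db2[k]: nonzero exactly when j == k.
    assert T.sum() == B * M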

$$G_{full} = \left[\frac{\partial L}{\partial h_2}\right]_{full} \sim (1, M, B) $$

with the "compact form" we're used to:

$$G = \left[\frac{\partial L}{\partial h_2}\right]_{compact} \sim (B, M) $$

Except for that extra axis, $G = G^\top_{full}$; that's where the transposition comes from.

Overall, one can indeed operate on the "full" derivatives in a more algorithmic way, but sadly that would increase the number of dimensions and require way more computation than the compact form.

I need to learn a bit more about how to represent derivatives with tensors and their product before trying to write out the full form for the expression above. I'll update my post with this when I do: getting outer product for tensors with derivatives isn't straightforward.

Perhaps there's a way to both keep it "mechanised" and also simplify these expressions--let me know in the comments!

Traffic IV: How to Save 10 Minutes on your Commute, or the Mystery of the 101-92 Interchange

Driving on a highway is like picking stocks. There are multiple lanes, and as we established earlier, one of them is just faster than the others. Drivers take notice; they move into the lane just like the stock market players “move into” the stock of a promising company, buying its shares.

Then, the same thing happens as in the stock market: as more cars occupy the lane, it becomes more congested, and it slows down. The lane is no longer “profitable”, and other lanes start to outpace it. Recall the passive investment wisdom: no matter how hard you try, you will still move with the average speed of the flow (or, likely, worse). Unless you possess uncanny talent, or can see the future, or have insider knowledge, you have no advantage. On a freeway, talent won't help, and as for seeing the future: you can barely see past that truck just ahead.

But I do have some insider knowledge for you. How come some commuters drone it out for 15 minutes in bumper-to-bumper traffic and those few "in the know" zoom past them and beat them to the next exit? Read on–or skip to the solution.

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

Enter the intersection of Highway 101 and Highway 92.

Or, rather, enter the gigantic southbound traffic jam that “happens” there every single weekday, from 6.30am to 10am. The jam is so reliable that some atomic clocks use it as a backup time sync mechanism.

What happens there is simple. Three (!!!) different streams of traffic merge into this already clogged 4-lane highway: local 19th Ave, eastbound Highway 92, and westbound Highway 92, all full of techies anxious to make it to their 10am standup meeting. (Note that I might have mixed up the order of the on-ramps, but it's not important for our purposes.)

The small outlet onto 92 doesn’t help as few people travel in that direction during peak hours. Or… does it?

If you’ve been a diligent student of traffic laws in the last several posts, you can see what’s happening here. The interchange is an interplay between four traffic situations:

  1. a right exit reducing the amount of traffic in the right lanes;
  2. a slowdown at a merge that is causing the blockage in the right lanes;
  3. "friction" that "carries over" the slowdown from the right lanes to the left lanes;
  4. the faster left lane getting backed up before the blockage.

We know from Traffic II that if there's a blockage affecting all lanes, then the left (the faster) lane will be backed up more (this is pattern (4)):

(Here, the green circles mean "free flow", the yellow "mild congestion", and the red "severe congestion".)

What causes the blockage in the left lanes? It's the friction from the blockage in the right lanes. What causes that? The three freaking merges that dump San Mateo and East Bay commuters onto 101.

So the traffic jam unfolds the same way every day: patterns (1) and (2) make the right lanes more congested, (3) carries over the slowdown from the right lanes to the left, and (4) causes the slowdown in the right lanes way way before the interchange. Here's what the traffic looks like there:

The optimal strategy

So how do you beat the rat race? You got it: drive in the rightmost lane, and then merge left five times.

Indeed, in a simple two-lane case, the best way to traverse the blockage at a merge is to ride the right lane until just before the merge point, and only then merge left:

So when we apply this rule to the overall situation at the interchange, this would be the optimal strategy:

This doesn't seem like much, but that's because this model is very compressed. If we zoom out, the traffic would instead look like this:

It’s not easy, and it is a bit stressful. Most drivers prefer to just stay in their lane and relax, but if you want to get ahead in life, you've got to put some work in. Merging is tight: commuters do not appreciate “freeloaders”, but it’s easier than it seems, just as the prior research shows.

A common mistake is to merge too early. Yes, I know you want to be conservative; yes, I know it's hard to patiently watch the traffic slow down ahead of you; you want to react, you want to merge left, away from the impending congestion... This is a mistake. If you're in the right lane on 101, merge under the bridge, not before! Exercise patience. You're the traffic tiger next to a flock of antelopes. Wait.

Another trick, employed by some shuttle bus drivers, is to exit the freeway and immediately re-enter it. However, this involves a traffic light, and in my experience it’s not as fast as five merges (though if you're driving a bus, merging is indeed more complicated).

Appendix

If you’re curious, I’ve filmed two videos of this interchange.

Approach from Highway 101. Notice how three traffic lanes merge into 101, and how motorists merge leftwards. Before the overpass, the left lanes are more congested, and the traffic in the right lanes moves faster. However, as the camera approaches the overpass, the left lanes pick up speed while the right lanes come to a standstill.

Approach from Highway 92. This time, we are merging into the right lanes that stall. Note how the relatively small amount of traffic coming off of Hwy 92 results in a disproportionately slow merge onto Hwy 101. You can also notice, at the time of the merge, that the left lanes are moving way faster than the merging lanes.

(Also some dude confused the shoulder with an extra lane... Oh if only!)

***

This concludes the traffic series. I originally wanted to write some simulations, but I realized there's already a large body of work.

If you're reading this from outside the US, you might be puzzled by what's happening. In California, unlike on the East Coast or in Europe, drivers do not have a culture of staying in the right lane and using the left lanes only to pass. Everyone basically drives wherever they want.

I de-lid Core i7 9700k and I have no idea if it was worth it

My old PC was due for an upgrade (I'd had it for 5 years), so I built a new one. And since I've been studying Machine Learning, I needed way more computing power than I had.

When overclocking my system, I "de-lid" ("de-lidded"?) my CPU, replaced the thermal paste, and upgraded to liquid cooling... What did I gain? Well, I have no freaking clue, since I did not measure performance before starting. 🤦 Worst mistake ever. Don't repeat it.

What is my setup?

  • nVidia RTX 2080 Ti. nVidia GPUs are essential for machine learning these days, so I got an RTX 2080 Ti from EVGA (this one works great and will last me through the end of my studies).

  • Core i7-9700k, an 8-core 4.6 GHz "gaming" CPU. In my experience, a GPU-powered machine learning algorithm will likely not use many CPU cores; instead, 1 core will be used a lot. That informed the choice of CPU, and Intel's Core i7-9700k benchmarking results from userbenchmark.com were solid (I've never been an AMD guy).

For a short period of time, I had the most advanced consumer CPU setup... The RTX demos and the "Metro: Exodus" game look amazing... and ironically I don't have any time to play them. 🤓

De-lidding Core i7-9700k

When researching CPUs, I found out about "de-lidding": "opening up" the CPU itself to replace the internal thermal compound with something better. Some Intel CPUs, they say, shipped with shitty stock thermal paste that wasn't very efficient at heat transfer.

Core i7-9700k, however, ships with some sort of a metallic solder. How do I know that? Because I opened it up :-)

De-lidding was a well-established, well-documented, and well-researched procedure for older CPUs. There are even tools like the "Delid Die Mate" available to make it easier. My CPU was newer, so the evidence was scant, and most of it recommended against de-lidding "for most users", e.g. this video.

But I still did it, and the worst part is, I don't even know if it made any difference, because I didn't measure beforehand. I have excuses: I was busy with school and didn't have time to measure anyway ("But you still had time to assemble a new PC, asshole!" says the voice in my head.)

Yes, my benchmark scores are through the roof, but they should've been through the roof to begin with.

Currently, I run my system at the following settings ("limiter" is why I didn't OC it further):

| Component | Stock | Overclocked | Δ | Temperature under load | Limiter |
|---|---|---|---|---|---|
| CPU core clock | 4600 MHz | 5000 MHz | +400 MHz (+8.7%) | 71 C | temperature: 71C is a bit above the comfortable 60s range |
| CPU cache clock | 4300 MHz | 4700 MHz | +400 MHz (+9.3%) | 71 C | I have no idea how to test the cache, so I used the same speedup as above |
| GPU core clock | 1935 MHz* | 2065 MHz | +130 MHz (+6.7%) | 72 C | Atomic Heart demo crashed at +140 (Fire Strike crashed at +150) |
| GPU memory clock | 7000 MHz | 8200 MHz | +1200 MHz (+17.1%) | 72 C | it kept working but I got scared... |

* For some reason, the manufacturer website lists the base core clock as 1635 MHz, but my benchmarking software says 1935 MHz. I'd trust the software. By the way, the stock temp on my GPU was 66C under load.

I used Speccy to measure the temperatures, Precision X1 to overclock the GPU, and Intel's shitty-looking tool for the CPU. I used the Unigine and 3DMark benchmarks to load the GPU, and CPU-Z to load the CPU.

What did I learn?

CPU cooler matters

The CPU cooler matters, and it matters a lot. I initially thought I had screwed up the CPU by de-lidding it: the temperatures under load were up to 92C (!!!). That's too much. Various overclocking forums recommend keeping an overclocked modern Intel CPU in the 60s.

Replacing the basic cooler with a 240mm liquid cooling solution from Corsair helped.

That being said, I ran that CPU in the 80-85C range for days at a time (while training neural networks), and everything turned out fine. I could've just been lucky though.

Liquid cooling takes up space

Turns out, there's not enough space in my computer case for the 240mm liquid cooling radiator and the fans. My case is "only" 18 inches tall; I guess I'm up for an unexpected upgrade. My CPU looks like it "vomited" a radiator.

Building PCs is tedious

It was exciting the first time, when I was a 12-year-old boy. Today, turning a million screws seems no better than assembling an IKEA drawer set. Getting the best performance by tuning overclocking "hyperparameters" and then putting the results in a spreadsheet looks way too much like work or school.

De-lidding in more detail

Tinkering with the device in the manner described in this post will definitely void your warranty. I'm just sharing my experience; I'm not describing some manufacturer-recommended procedure. Do not attempt unless you know what you're doing and unless you're prepared to buy another CPU.

Here's what de-lidding looks like. You'll need these tools:

  • der8auer Delid Die Mate 2 to remove the lid.
  • Cardboard knife / razor / small kitchen knife to clean out the solder.
  • Paper towels.
  • A really good "liquid metal" thermal compound, because if not this, then why bother? Buy at least 2-3 packs: 1 pack is literally enough for 1 application, so if you screw it up (and the first time, you will), you'll need more.
  • Medical tape or some other non-sticky tape (like a painter's tape) to protect circuitry from liquid metal.
  • A set of very fine sandpaper, 2000-3000 grit. This is to remove the solder and excess liquid metal when you screw up. You can skip this if you're very patient and careful with sharp instruments.
  • Silicone glue that can withstand heat and can glue metal (the CPU's heat spreader) back to the fiberglass CPU base (some say it's optional, though).
  • It's best if your toenails are due for a trim, as toenails are the best tool for Step 4. (You can also use a non-sharp metal object or a credit card.)

Step 1: Plug the untampered CPU in, and measure

Measure your performance first. You will spend hours assembling (it took me 2 hours), de-lidding (4 hours), and then overclocking (5 hours) your PC; spend 20 minutes testing your assembled PC first. Use these:

  • CPU-Z has a cpu load-test.
  • Speccy will show you a temperature graph (click on those green squares).
  • Precision X1 will show GPU graphs (others use Afterburner).
  • userbenchmark.com offers some basic tests, but I found them to have ~10% variance from run to run.
  • Even the free version of 3DMark will save your results online so you don't have to write them down.

Step 2: Remove the lid!

Take the CPU out and clean the thermal paste off of it (you have paste on it because you measured first, remember?). Now use the Die-Mate tool as instructed in the manual, or watch it on YouTube.

You'll find that the lid covers literally nothing special. Just another black box.

Step 3: Clean out the thermal compound.

No, don't use your toenails yet. This is where you take out your X-Acto knife or a razor and slowly, patiently scrape the compound out. This is also where you understand that this whole thing is a waste of time, and you wish you hadn't done it. Too late.

Once you've removed most of it, use fine (3000 grit or more) sandpaper to clean out the rest.

If you're re-doing your prior work and scrubbing out liquid metal, be very careful! Do Step 5 first (add the tape), and then wipe the metal out with a paper towel. Be generous, and never swipe with the same side of the towel twice: the liquid metal will go right back on.

Step 4: Clean out the silicon glue using your toenails or a non-sharp object

Toenails will just work, whereas a knife might damage the small metal components under the glue. Credit cards, or other non-sharp, sturdy metal pieces (find some case component you aren't using), come in handy too.

Oh, and you can use sandpaper (this time a coarser one, like 300 grit) on the heat spreader.

Step 5: add tape around the die

Liquid metal is conductive, so it's extremely important that none of it gets into contact with the circuitry (actually, it'll likely be fine if you spill just a bit and wipe it off carefully, but the tape is essential).

Some people suggest covering the surface with nail polish instead (wat?). I have not tried it as I don't polish my nails typically. ;-P

Step 6: add a little bit of liquid metal

Liquid metal works in tiny amounts (see the instructions here). Very slowly and carefully push out a ball about a quarter-millimeter in size (use the needle-like tip). Practice on a paper towel first. Use the q-tip that comes in the box to spread it out.

In this picture, I added a bit too much on the chip (it worked, though) and a bit too little on the heatsink:

Keep adding little bits and spreading. It will sometimes seem that you squeeze more out and nothing changes, but the added metal just makes the "pool" a tiny bit taller. You don't want a "pool" at all: the layer should look a bit matte; it should not bulge "up" at the edges nor "reflect" like a mirror.

Step 7: Remove the tape

The tape should come off easily. If you added too much metal, siphon the excess back into the syringe using the other tip.

Step 8: Add a tiny bit of silicone glue to all edges of the heatsink

Put it on all edges the same way it was before. Use a small amount (a bead 1mm tall at most); a little excess is OK here, since the glue is not conductive.

Tip: cut the tube's nozzle diagonally with scissors, so the glue is easier to spread on a horizontal surface.

Step 9: glue it back using the delid tool

Glue it back. The instructions tell you to be careful not to damage the CPU. I found I was too careful the first time around: the lid ended up a bit squishy. So the next time, I screwed it in until it wouldn't go any further.

Wait for... idk, I left it overnight. Clean out the excess glue from the outside with a paperclip so it's nice and tidy.

Step 10, optional: photograph the top of the CPU (serial number and stuff)

If you have some liquid metal left and you want to use it between the cooler and the CPU, then you're seeing the numbers on top of the CPU for the last time. Liquid metal is nasty; it binds to other metals and is impossible to clean. You can sand it off, but you'll be sanding off the top layer of the paint.

Step 11: put it back in, measure the results, and tell me what you got in the comments!

Because I have no clue whether it was all worth it. Tom's Hardware says it would be, but others don't seem to think so.

Spring Break!

This post is superseded by a more exhaustive "I'm now Stanford Official" which you should read instead. This post was just a short update halfway through the program.

One might notice that the posts in my blog kind of follow the school schedule. Last time I posted around Christmas, and this post was written at the end of March. Well, that’s because I am on spring break.

I’ve enrolled in a Stanford graduate education program on Artificial Intelligence through SCPD, the Stanford Center for Professional Development. As part of the program, you take the same courses as Stanford’s CS students do and--if you get good grades--you earn a certificate, which is not quite a degree but still something you can frame. You might also keep it rolling until you get a Master’s; many find it hard to resist.

The coolest part is that you get to learn from the actual pros who are advancing the field. I took classes taught by Percy Liang and Christopher Manning. Yesterday we were just reading their papers, and now look, I can see them teach a class! Make sure to check the course catalog if big names are what you’re after. Also, some report that in some courses the head instructor doesn’t really appear much anymore, and only gives one or two presentations over video.

It takes a lot of hard work. The assignments take a surprisingly long time to complete, 10-20 hours a week; either they are really hard, or I’m a bit slow (or both). But gosh, is it worth it. I’m impressed by the quality of American education (this is my first experience at an American university). The Teaching Assistants challenge and push you a lot, and thus help you learn.

Add course projects and an occasional midterm here and there, and so you have a second full-time job.

So far I’ve completed two courses:

  • CS224n: Natural Language Processing (NLP) with Deep Learning (RNNs/CNNs, word vectors, sentence parsing);
  • CS221: a course on a wide range of AI topics, featuring search, reinforcement learning, Markov decision processes, and other basic tools of AI. The “overview” nature of the course is a trap: I still had to pass a midterm that required deep understanding of all of these.

I now have a divided impression of AI and Machine Learning in general. On one hand, Machine Learning has a long history and deep roots in logic and statistics. That’s something you can’t get from the hyped-up articles in the media, only through study. These neural networks and their architectures are actually closely tied to “non-neural” function optimization. On the other hand, there is a lot of stuff I can't quite grasp yet: why do these networks work so well? What are their limits?

I'll get a better feel for it in time; it’s still not over: I’ve just completed my 2nd course out of 4, and I’ll soon embark on learning about Computer Vision. However, the education and inspiration I've received, even without the paperwork to boast about, is priceless.

If you do choose to enroll, sign up early. The courses fill up the week enrollment opens: see the "open for course enrollment" dates in the Academic Calendar, and the quarters in which the courses are "offered" in the course catalog.

But if getting degrees is not your thing, many courses are available online, e.g. here are the CS224n lectures.

Meanwhile, stay tuned for a couple of posts I've written over the spring break: the completion of the Highway series, and some experiments with hardware.

Traffic III: Why doesn't Changing Lanes Always Work?

Recall from the previous post that lanes with more traffic flow (more cars per minute passing a certain point) back up more when there's congestion. This can be summarized by the following picture:

The pressing question from that post was, why not just change lanes to the faster one, as you're approaching the congestion?

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

Modeling timing in lane changes.

The short answer is: if you're already stuck in a slow lane, and the next lane over is faster, then it's pretty hard to merge into it due to the speed difference.

Many of the traffic models I read about do not consider the distribution of cars across traffic lanes, so I couldn't find readily available research on multi-lane traffic. Let's speculate about it, just for the fun of it. What follows is not based on any existing model, though I'm sure some exist.

A car can merge if the total time it takes to (a) physically drive to the other lane and (b) accelerate to the speed of the traffic in that lane is smaller than (c) the time that passes between two cars in the other lane passing the same point.

$T_a$ is the time it takes to physically travel the width of the lane $w$: either the width divided by the across-lane component of the velocity vector (with $\alpha$ the entry angle), or, if the initial velocity is low, the time dictated by the acceleration term instead. Don't discount that: traffic lanes are unexpectedly wide (just like the white stripes are unexpectedly long).

$$\begin{align*} T_a &= \frac{w}{v_3\cos\alpha}, \quad T_a' = \sqrt{\frac{2w\cos\alpha'}{a}} \\ T_b &= \frac{v_2 - v_3}{a} \\ T_c &= \frac{1}{q_2} \end{align*}$$

Actually, we should scale $T_b$ to somewhere between 0.5x and 1x of this value: in a forced merge, the other driver will decelerate to prevent a collision (effectively adding their deceleration to your acceleration), but we also need to factor in their reaction time.

We could just leave it here, and say that merging requires the following condition:

$$T_c > T_a + T_b$$

But note that as the lane you're in (lane 3) slows down, $T_b$ becomes larger ($v_3$ decreases), while the other parameters stay the same. So the lesson is: change lanes before you're stuck; otherwise it might be too late.
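Here's a minimal numerical sketch of this condition (all parameter values are invented for illustration; the variable names follow the formulas above):

```python
import math

def can_merge(v3, v2=25.0, w=3.7, alpha=math.radians(20), a=2.0, q2=0.5):
    """Check whether T_c > T_a + T_b.

    v3: our speed (m/s); v2: target-lane speed (m/s);
    w: lane width (m); alpha: entry angle;
    a: comfortable acceleration (m/s^2);
    q2: target-lane flow (cars per second).
    """
    T_a = w / (v3 * math.cos(alpha))  # crossing the lane width
    T_b = (v2 - v3) / a               # matching the target lane's speed
    T_c = 1.0 / q2                    # average headway in the target lane
    return T_a + T_b < T_c

print(can_merge(v3=22.0))  # still rolling: small speed gap -> True
print(can_merge(v3=10.0))  # already stuck: T_b blows up    -> False
```

Dropping $v_3$ from 22 to 10 m/s flips the answer: the "change lanes before you're stuck" lesson in numbers.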

But merges like that still occur; why?

On a real highway, while $T_a$ and, to a large extent, $T_b$ (except for the trick described in the side note) are not under the driver's control, $T_c$ is not actually constant. Short-term fluctuations are sometimes sufficient to merge out of the slow lane, and their variance is larger in slower traffic.

How can a merging driver control $T_b$ in bumper-to-bumper traffic, when the lane moves with speed $v_{max}$, which seemingly caps your own speed $v_3$? It might seem that $T_b \ge \frac{v_2 - v_{max}}{a}$. There's a trick. To cheat this bound, when the driver in front of me accelerates, I might just wait a bit, leave myself a larger gap, and reach a speed larger than the lane average $v_{max}$ before starting to merge.

In other words, merging out of a slow lane requires waiting a bit to decrease $T_b$. Note that the gap doesn't have to be large (it's a small multiple of the lane width $w$), so cars merging into this gap from an even slower lane shouldn't be a problem.
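A quick sanity check of that claim, with numbers picked for illustration: accelerating from $v_{max}$ to $v_2$ at rate $a$ requires a head-start gap of

$$\Delta d = \frac{(v_2 - v_{max})^2}{2a},$$

so for a 5 m/s speed difference and a comfortable $a = 2\ \text{m/s}^2$, that's about 6 m, or roughly a couple of lane widths.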

A sensible assumption is that the number of cars passing a given point in the other lane (the one we want to merge into) within a unit of time follows a Poisson distribution with $\lambda = q_2$ (where $q_2$ is the flow of that lane). The probability that no car passes within the time it takes to merge, $T = T_a + T_b$, is then $p = e^{-q_2T}$. If $q_2 = 40\ \text{cars/min}$ and $T = 2\ \text{sec}$, then $p \approx 0.26$.

It gets more complicated when we consider repeated observations, though. The main point is, merging in heavy traffic can be modeled with simple mathematical equations.
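To make that concrete, here's the estimate in a few lines of Python, with a crude extension to repeated observations (assuming, somewhat naively, that successive gaps are independent):

```python
import math

q2 = 40 / 60   # flow of the target lane: 40 cars/min, in cars per second
T = 2.0        # seconds needed to complete the merge

# Probability that a single headway stays car-free for T seconds (Poisson arrivals).
p = math.exp(-q2 * T)
print(f"single gap is mergeable: {p:.2f}")     # ~0.26

# Watching n successive gaps, the chance that at least one fits:
for n in (1, 5, 10):
    print(f"n={n}: {1 - (1 - p) ** n:.2f}")

# Expected number of gaps to watch (geometric distribution):
print(f"expected wait: {1 / p:.1f} gaps")
```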

"Friction", or modeling the impact of slow lanes onto the fast ones.

The result from the previous section is that a merge from the slow lane into the fast lane will eventually happen with high probability. What effect does it have on the speed of the "fast" lane?

The impact I observed is that drivers either slow down or merge out of the lane adjacent to a stopped or slow lane. I think it stems from a subconscious expectation that some driver will take advantage of the algorithm described above. Remember, the merging driver strives to decrease $T_b$:

$$T_b = \frac{v_2-v_3}{a + b}$$

where $b$ is the deceleration of the driver in the target lane. (With $b = a$, this recovers the 0.5x factor mentioned earlier.)

Indeed, if the merging drivers expect that $b \ne 0$, then the drivers in the target lane will naturally try to avoid decelerating, and avoid being in that lane altogether. I first noticed this effect when driving next to interchanges with congested exits, such as 580 West-80 East or 101 North-SR 109 North.

This naturally maps to a concept of "viscosity", and is thus similar to the effects found in laminar flow.

So in some regards, the traffic flow does look like laminar flow after all.

***

In the next post, I'll share videos of a traffic situation I repeatedly observe on the road, and maybe attach a short commute guide for the Silicon Valley commuter.