Lessons from a Smiling Bot. Lesson II: Get Features from Production

Other lessons from a Smiling Bot

The lessons I've learned when building a Smiling Bot: an embedded CNN model for the deep learning for Visual Recognition course at Stanford.

The Smiling Bot I made for CS231n was performing okay, at least on paper. 40% recall at 70% precision for smile detection is not disastrous for an "embedded" model. When I showed it around to colleagues in the office, however, the performance seemed way worse. Two smiles recognized out of about 15 attempts: that is not what 40% recall sounds like.

This time, however, I "logged features from production"--every capture taken was saved to internal memory for later inspection.
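On the device, this amounts to little more than writing every frame to disk right before it goes into the model. A minimal sketch (the storage path here is made up for illustration):

```python
# Save the exact capture that the model sees, so that training data can later
# be drawn from the same distribution. The directory is a hypothetical
# on-device location.
import time
from pathlib import Path

CAPTURE_DIR = Path("/home/pi/captures")

def log_capture(jpeg_bytes):
    CAPTURE_DIR.mkdir(parents=True, exist_ok=True)
    (CAPTURE_DIR / f"{int(time.time() * 1000)}.jpg").write_bytes(jpeg_bytes)
```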

Turns out, the pictures of the actual users didn't quite look like the training data. Here's a dramatic reenactment (the verbal Terms of Service my users consented to didn't allow me to publish their images):

Whereas the pictures in my training dataset looked like this:

The training dataset images were universally sharper, had better dynamic range, and less noise. Yes, the augmentations I applied did add jitter and motion blur, and they rotated and scaled the images, but there was no "destroy the shadows contrast" or "emulate the backlit subject" augmentation.

I should add one, retrain, and see what happens.
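Something along these lines would be a starting point (a sketch with numpy; the parameter ranges are guesses, not tuned values):

```python
import numpy as np

def degrade_like_production(img, rng=np.random):
    """Crush contrast, darken the subject, and add noise to an HxWx3 uint8 image."""
    img = img.astype(np.float32) / 255.0
    # Compress the dynamic range: lift the black point, lower the white point.
    lo, hi = rng.uniform(0.1, 0.3), rng.uniform(0.6, 0.9)
    img = lo + img * (hi - lo)
    # Emulate a backlit subject: darken the center relative to the edges.
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    img *= (1.0 - rng.uniform(0.2, 0.5) * (1.0 - dist / dist.max()))[..., None]
    # Cheap sensor noise.
    img += rng.normal(0, rng.uniform(0.01, 0.05), img.shape)
    return (np.clip(img, 0, 1) * 255).astype(np.uint8)
```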

The Chicken and The Egg

"Logging features from production" is considered the cheapest and the most maintainable way to ensure the model trains of what it would actually face. Quoting Rule 29 from Google's "Best Practices for ML Engineering" post:

Rule #29: The best way to make sure that you train like you serve is to save the set of features used at serving time, and then pipe those features to a log to use them at training time.

Even if you can’t do this for every example, do it for a small fraction, such that you can verify the consistency between serving and training (see Rule #37). Teams that have made this measurement at Google were sometimes surprised by the results. YouTube home page switched to logging features at serving time with significant quality improvements and a reduction in code complexity, and many teams are switching their infrastructure as we speak.

They're correct. But there's a chicken-and-egg problem here.

See, if I don't have a device with a trained model, I have no users! If I have no users, nobody is producing features for me to train on. For my colleagues to play with my device, I had to have the device built. I probably could just flip a coin every time a user presses the button, but that approach wouldn't scale to many users.

But if I want my model to perform, I need to train on the production features. What came first, the chicken or the egg?

I think augmentations and feature engineering could be the answer here. But the more important lesson is that you can't avoid training on production features, even in the early stages of model development.

***

In one capture, the non-neural face detection pre-filter even considered the light fixture to be a more likely face than my own.

That's why we need neural networks, kids.

Lessons from a Smiling Bot. Lesson I: Training/Serving Skew, Second Opinion, CSV-based ML Infra

Most Machine Learning courses at Stanford require coming up with a relevant project and delivering it before the course deadline. For the courses I took earlier, I built a similarity-based wine search algorithm and an automatic advertisement generator. For the most recent course, deep learning for Visual Recognition, I made a Smiling Bot: a Raspberry Pi device that detects when you smile and... smiles back.

Other lessons from a Smiling Bot

The lessons I've learned when building a Smiling Bot: an embedded CNN model for the deep learning for Visual Recognition course at Stanford.

The time constraints of the course only allowed me to build a simpler version. Sadly, it didn't perform well enough. On paper, everything was great: high precision (70%), acceptable recall (40%), and solid speed. This, in addition to a well-done poster, was enough to get an A. But when deployed onto the actual device, the performance didn't seem to match the numbers.

There were more learning opportunities in this project, so I decided to dig deeper and make it work. I found and fixed some issues, but there's more to do. Here's what I learned.

Smiling Bot

I've always wanted to make machines more like humans. A friend of mine, Melissa, suggested exploring the emotion recognition angle.

So I challenged myself to make a small device that smiles back when you smile at it. (I think Melissa meant something about helping people who have trouble recognizing emotions, but we gotta start somewhere 🤷).

When you press on its nose, it "looks at you" with its eye-camera, runs a Convolutional Neural Network on device (no phoning home to the cloud!), and lights up the green "smile" light if you were smiling. :-) That simple. To the left is the battery. The bot can feed off of it for about 4-5 hours.

Originally, inspired by a talk by Pete Warden and this device, I wanted to deploy it onto an Arduino. But my camera interface didn't match, so I just used the Raspberry Pi I had in my drawer.

(The nose pressing will be replaced with a proximity sensor real soon. This button is mostly for testing.)

Image classification (into "smiling / not smiling") is mostly a solved problem in deep learning. The challenge here was: can I make this inference run on a low-power, low-performance device without losing quality?

On paper, its performance was okay. Stellar ROC curves, 40% recall at 70% precision; what else can you wish for from a small, 2 MB (!!!) model?

But in practice, the performance didn't seem to match the promised figures. Then, I started digging deeper.

Lesson 1A: Human Review requires a second opinion

Machine learning requires data labeled by humans to train the neural networks. Facial expression datasets are really hard to come by, and human labor is expensive. (That's why we're trying to train machines to be more like humans in the first place.) So of course, doing an unfunded research project, I wanted to save on labeled data.

I decided to only ask one human to evaluate each picture. Big mistake.

  1. I didn't collect enough data. Only 6000 training examples plus 1000 for validation didn't seem like enough to train a 5-7-layer AlexNet.

  2. The data I collected was of low quality. Based on a random sample of the data, I found that the labels were wrong 14% of the time. I asked a simple question, "Is the person in the picture smiling?" with Yes/No/IDK options. Yet 1 out of 7 pictures was labeled incorrectly.

The best way to mitigate this would be to ask 3 or 5 raters to rate every picture and take the majority vote.

If I were cost-conscious, I'd ask only two raters and simply discard the answers where the two disagreed. Think about it: this strategy costs 14% more than just asking 2 raters (since we need to send out more data), compared to a 50% increase if we ask 3 people.
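In code, the filter is only a few lines. A sketch, assuming the ratings come back as a dict from image id to the list of answers:

```python
from collections import Counter

def keep_agreements(ratings):
    """ratings: dict of image_id -> list of "Yes"/"No"/"IDK" answers."""
    labeled, discarded = {}, []
    for image_id, answers in ratings.items():
        label, votes = Counter(answers).most_common(1)[0]
        if votes == len(answers):        # both (or all) raters agree
            labeled[image_id] = label
        else:
            discarded.append(image_id)   # disagreement: drop or re-rate
    return labeled, discarded
```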

Turk Pricing woes

Perhaps one of the reasons for the disappointing quality was that I batched the tasks together, so that on one web page a turker would answer questions about multiple images.

Amazon's pricing for Turk tasks disappointed me a bit. First, how much should you pay the humans for an evaluation? The right way is to target a certain pay per hour, estimate how many tasks per hour a human can do, and divide the two.

Say we want the rater to earn \$20 / hour. I wrote a small piece of rating software myself and determined that I can rate ~2000 pictures per hour. (At which point I should've just rated them all myself, but I considered it an investment in my education.) So I would pay \$20 / 2000 = \$0.01 per picture.

But Amazon wants a cut of at least \$0.01 per "task". I didn't want Amazon to get a 50% cut of my pay.

So I made the survey page contain 5 pictures per task, did "the right thing" by advertising that in the description, and made Amazon's minimum fee a much smaller fraction of the payout. Of course, inspired by the Amazon example, I took the opportunity to shortchange the workers and pay \$0.02 for 5 pictures.

However, the workers sent me feedback (yep, I got feedback from the workers--apparently, that's a feature) that this batching broke their hotkeys. I haven't yet figured out how to fix that.

It still cost me \$300, but in the grand scheme of things I've got my money's worth.

Lesson 1B: Training/Serving Skew

Training/Serving skew is a decrease in the performance of an ML model due to the unforeseen (and sometimes hidden) difference in data used for training and the data actually seen in production (when "serving" the users).

There aren't many materials on this concept; I might just be using the Google-invented name for it (e.g. see this blog post).

Interestingly, I already knew about this concept from work. When building a machine learning framework at work, experienced AI engineers warned me about training / serving skew. Now I also learned to look for it.

Essentially, I trained my model on the Google Facial Expression Comparison Dataset. Each photograph comes with the bounding box of the face, so of course I cropped the images to the bounding boxes.

And of course, when deployed on device, the pictures were taken as is, without any face cropping.

OK, I fixed it by adding the standard Viola-Jones face detector; there are pretrained detectors available. But then it started taking ~1-2 seconds to run the face detector. My smallest smile detection model takes that long to run!
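For reference, here's roughly what that pre-filter looks like with OpenCV's stock Haar cascade (a sketch; the parameters are the commonly used defaults, not necessarily what I ended up with):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_largest_face(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                       # no face: skip the capture
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # take the largest box
    return image_bgr[y:y + h, x:x + w]
```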

The performance improved. At least it was able to detect my own smile well, under the laboratory lighting conditions. In the wild, it worked on only 1 out of 3 subjects though.

Lesson 1C: Easy to use Machine Learning infra is lacking

In order to collect the data, I pretty much wrote my own database. Why? Because a dataset rarely contains the features you need. You need to transform the dataset into features. Then you need to store the features. And then you repeat this process, wishing you'd had enough foresight and time to make it repeatable.

So here's an inventory of my tiny effort at ML infra:

  1. About 100 lines of Python to load/save/join/merge data into CSV (aka "the DB");
  2. About 250 lines of Python to download data and extract features from the Google Faces Dataset
  3. About 90 lines of Python to manage MTurk ratings.
  4. And 80 more lines of Python to store model evaluation / performance / profiling results.
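For flavor, the "DB" layer boils down to something like this (a sketch with pandas; the file and column names are made up for illustration):

```python
import pandas as pd

def add_features(examples_csv, features_csv, out_csv):
    examples = pd.read_csv(examples_csv)    # e.g. image_id, label
    features = pd.read_csv(features_csv)    # e.g. image_id, crop_box, blur_score
    merged = examples.merge(features, on="image_id", how="left")
    merged.to_csv(out_csv, index=False)     # the "DB" is just another CSV
```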

Good thing I make ML frameworks for a living, and I know what code to write and why.

And you know... it still seemed faster and more efficient than investing in learning an ML framework (such as MLFlow and friends).

More Lessons to Learn

Although I've mitigated the training/serving skew to some extent, there are more lessons to learn. I know that because whenever I tested my robot in the wild, the results were a bit subpar. Some directions I'm going to explore now are:

  1. Should I just get more labeled data?
  2. And if I do, the dataset will no longer fit in RAM; should I learn a more advanced ML framework to help me manage that, or should I train from scratch?
  3. Did I reduce the first convolutional layer a bit too much?
  4. Is there something in the way people interact with the device that promotes skew? E.g. is the way people hold the machine, or the lighting conditions, representative of the training dataset?

Stay tuned!

What is the gradient of broadcasting?

"What is the gradient of numpy broadcasting?" I was wondering the other night. It's almost midnight, my brain is tired, and I got a bit stuck.

In short...

In numpy, broadcasting is adding a one-dimensional vector b2 to a 2-D matrix X, which implicitly adds b2 to every row of X.

To backpropagate through this operation, i.e. to compute the gradient of the loss with respect to b2 (let's call it g), we rewrite the expression as matrix multiplication. Here, B2 is a matrix of shape (M,1), which is just b2 with one extra axis, and $1^{B\times 1}$ is the broadcasting matrix:

$$X+b_2 = X + 1^{B\times 1}B_2^\top$$

In order to backpropagate the upstream gradient $G$, it gets multiplied by the transposition of the broadcasting matrix:

$$g^\top = \frac{\partial L}{\partial b_2} = 1^{1\times B} G$$

Effectively, $1^{1\times B}$ acts as a summator along the first (batch) axis ($g_k = \sum_{i=1}^B G_{i, k}$), which in numpy can be expressed as:
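```python
g = G.sum(axis=0)   # sum the upstream gradient over the batch axis
```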

Let's explore how to get there in three different ways: the intuitive, the "compact form" and the "full form" tensor derivatives. But first, how did this even come about?

Midnight homework

Back then, I had just started my CS231n studies as part of the Stanford SCPD certificate program. CS231n "Convolutional Neural Networks for Visual Recognition" is a great "starter" course if you study Deep Learning. We had to learn the basics and manually implement and test quite a few backpropagation algorithms for simple network layers. I'll write a more in-depth overview later.

Importantly, we got to understand vector operations at a really low level, and we had to use a general-purpose math library, numpy, as opposed to Deep Learning-specific Tensorflow or PyTorch.

Broadcasting in a Fully-Connected Layer

When I just started CS231n, I was still used to that framework-level thinking about deep learning algorithms, and it took a bit of effort to shake it off. When implementing a two-layer fully-connected neural network using numpy, you add the bias vector to the input batch multiplied by the weight matrix:
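```python
h2 = X1.dot(W2) + b2   # b2 of shape (M,) is broadcast across the batch
```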

Here:

  • X1 is a matrix of shape (B, N) where B is the batch size, and N the number of outputs from the first hidden layer.
  • W2 of shape (N, M) where M is the number of outputs from the second hidden layer;
  • b2 is the bias vector of shape (M).
  • h2 of shape (B, M) is the output of the layer for each example in the batch.

But wait. The result of the matrix multiplication X1.dot(W2) is of shape (B, M), so how can you add a vector of shape (M) to it? In numpy, as well as in our lecture notes, that's called "broadcasting", and it's a convenient method of working with batches of data. The code above essentially means "add b2 to every row of X1.dot(W2)". Our lecture notes told us to "use broadcasting", and I did. Neat.

Later, I was implementing backprop, and for that I needed to know the derivative $\frac{\partial h_2}{\partial b_2}$. That's when I got confused. What is the gradient of that "broadcasting"?

Intuitive derivation

We can ask a simpler question instead: how does $b_2$ contribute to the eventual loss $L$? Since it independently affects each example in the batch (which means its impact through each example needs to be summed), we could simply compute the gradient for each example and then sum the results along the batch axis. That seemed too inventive to me at the time. Is there a more "mechanical" way to approach the task?

Broadcasting as matrix multiplication

To answer that, let's think about how to represent broadcasting without knowing that we operate in batches. Apparently, something happens to that b2 vector before it's added to the product $X_1W_2$: some function f is applied:

$$h_2 = X_1 W_2 + f(b_2)$$

In order for the addition to make sense, f should produce a matrix of shape (B, M) out of a vector of shape (M), such that each row of $f(b_2)$ equals $b_2$. Could it be matrix multiplication?

Indeed, if $f(b_2) = Zb_2^\top$, then Z is a matrix of shape (B, 1), and we can consider $b_2$ a matrix of shape (M, 1) (let's actually start calling it $B_2$, since a one-column matrix and a vector are two different things). It makes sense if Z is simply a column of ones (as in np.ones((B, 1))):

$$f(b_2) = 1^{B \times 1} B_2^\top$$

So, the expression for $h_2$ becomes something more manageable:

$$h_2 = X_1 W_2 + 1^{B\times 1}B_2^\top$$

My first instinct was to try to define $\frac{\partial h_2}{\partial B_2}$. As I later learned, that's not the most effective way: it only makes sense when operating on the "full form" derivatives I'll describe below. Instead of directly computing the "gradient of broadcasting", we use it in backpropagation from the upstream gradient $G$:

$$\frac{\partial L}{\partial b_2} = \left(\frac{\partial L}{\partial B_2^\top}\right)^\top = \left(\underbrace{\frac{\partial L}{\partial h_2}}_{= G^\top} \frac{\partial h_2}{\partial B_2^\top}\right)^\top = \left(\frac{\partial f(B_2^\top)}{\partial B_2^\top}\right)^\top G = \left(1^{B \times 1}\right)^\top G = 1^{1 \times B}G$$

This is essentially the gradient of the broadcasting applied via the chain rule.

What does the expression $1^{1 \times B}G$ mean? If you look at its per-component layout, this expression effectively calculates the sum of the right-hand side matrix along the batch axis. Yep, the transposed "broadcasting" matrix becomes the summation matrix, and the end result is:

$$g_k = \sum_{i=1}^{B}G_{i,k}$$
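This is easy to sanity-check numerically. A quick sketch (the squared-sum loss is an arbitrary stand-in for a real loss):

```python
import numpy as np

rng = np.random.default_rng(0)
B, N, M = 4, 3, 5
X1, W2, b2 = rng.normal(size=(B, N)), rng.normal(size=(N, M)), rng.normal(size=M)

def loss(b):
    return np.sum((X1 @ W2 + b) ** 2)      # broadcasting happens here

G = 2 * (X1 @ W2 + b2)                     # upstream gradient dL/dh2 for this loss
analytic = G.sum(axis=0)                   # the result derived above

eps = 1e-6
numeric = np.array([(loss(b2 + eps * np.eye(M)[k]) - loss(b2 - eps * np.eye(M)[k]))
                    / (2 * eps) for k in range(M)])
print(np.allclose(analytic, numeric, atol=1e-4))   # True
```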

Compact and Full forms of tensor derivatives

Wait, why is $\frac{\partial L}{\partial h_2}=G^\top$ rather than $G$? And why did we end up with a matrix of shape (1, M) instead of the desired (M, 1)? Where do all these transpositions come from? Well, that's because we've actually been using the "compact form" instead of the "full form" of tensor derivatives.

Yep, the way we usually operate with tensor derivatives is by using the "compact form", which omits quite a few zeros. You can read a great overview in the CS231n lecture notes.

The actual "full" Jacobian of two matrices $\frac{\partial \mathbf{A}}{\partial \mathbf{B}}$ would be a four-dimensional array with the shape equivalent to $(a_1, a_2, b_2, b_1)$ (where $A \sim (a_1, a_2)$ and $B\sim(b_1, b_2)$--we're using $\sim$ for "is of shape" here). However, in deep neural network calculus, one of these dimensions is usually a batch dimention. And batches are independent from one another. So most of the elements in the "full Jacobian" would be 0 (read more here). That's why everyone uses a compact notation to only list the elements that matter. This requires you to be inventive and "get it". But for me to "get it" I had to write it out in the full form.

$$\frac{\partial h_2}{\partial b_2} = 1^{B} \otimes I^{M\times M} \sim (B, M, M)$$

You can see how most elements of this tensor (let's call it $D$) are indeed zeros: $D_{i, j, k}=1$ only when $j=k$. In fact, it has only M nonzero elements per batch example. This is consistent with the convention that the "compact form" of $\frac{\partial \cdot}{\partial \mathbf{X}}$ is usually assumed to take the shape of $\mathbf{X}$ itself. Compare the "full form":

$$G_{full} = \left[\frac{\partial L}{\partial h_2}\right]_{full} \sim (1, M, B) $$

with the "compact form" we're used to:

$$G = \left[\frac{\partial L}{\partial h_2}\right]_{compact} \sim (B, M) $$

Except for that extra axis, $G = G^\top_{full}$; that's where the transposition comes from.

Overall, one could indeed operate on the "full" derivatives in a more algorithmic way, but sadly that would increase the number of dimensions and require way more computation than the compact form.

I need to learn a bit more about how to represent derivatives of tensors and their products before trying to write out the full form of the expression above. I'll update this post when I do: taking outer products of tensors in derivatives isn't straightforward.

Perhaps there's a way to both keep it "mechanised" and also simplify these expressions--let me know in the comments!

Traffic IV: How to Save 10 Minutes on your Commute, or the Mystery of the 101-92 Interchange

Driving on a highway is like picking stocks. There are multiple lanes, and as we established earlier, one of them is just faster than the others. Drivers take notice; they move into the lane just like the stock market players “move into” the stock of a promising company, buying its shares.

Then the same thing happens as in the stock market: as more cars occupy the lane, it becomes more congested, and it slows down. The lane is no longer “profitable”, and other lanes start to outpace it. Recall the passive investment wisdom: no matter how hard you try, you will still move with the average speed of the flow (or, likely, worse). Unless you possess uncanny talent, or can see the future, or have insider knowledge, you have no advantage. On a freeway, talent won't help, and as for seeing the future: you can barely see past that truck just ahead.

But I do have some insider knowledge for you. How come some commuters drone along for 15 minutes in bumper-to-bumper traffic while those few "in the know" zoom past them and beat them to the next exit? Read on–or skip to the solution.

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

Enter the intersection of Highway 101 and Highway 92.

Or, rather, enter the gigantic southbound traffic jam that “happens” there every single weekday, from 6:30am to 10am. The jam is so reliable that some atomic clocks use it as a backup time sync mechanism.

What happens there is simple. Three (!!!) different streams of traffic merge into this already clogged 4-lane highway: local 19th Ave, eastbound Highway 92, and westbound Highway 92, all full of techies anxious to make it to their 10am standup meeting. (Note that I might have mixed up the order of the on-ramps, but it's not important for our purposes.)

The small outlet onto 92 doesn’t help as few people travel in that direction during peak hours. Or… does it?

If you’ve been a diligent student of traffic laws in the last several posts, you can see what’s happening here. The interchange is an interplay between four traffic situations:

  1. a right exit reducing the amount of traffic in the right lanes,
  2. a slowdown at a merge causing the blockage in the right lanes,
  3. "friction" that "carries over" the slowdown from the right lanes to the left lanes,
  4. the faster left lane getting backed up before the blockage.

We know from Traffic II that if there's a blockage affecting all lanes, then the left (the faster) lane will be backed up more (this is pattern (4)):

(Here, the green circles mean "free flow", the yellow "mild congestion", and the red "severe congestion".)

What causes the blockage in the left lanes? It's the friction from the blockage in the right lanes. What causes that? The three freaking merges that dump San Mateo and East Bay commuters onto 101.

So the traffic jam unfolds the same way every day: patterns (1) and (2) make the right lanes more congested, (3) carries over the slowdown from the right lanes to the left, and (4) causes the slowdown in the left lanes way, way before the interchange. Here's what the traffic looks like there:

The optimal strategy

So how do you beat the rat race? You got it: drive in the rightmost lane, and then merge left five times.

Indeed, in a simple two-lane case, the best way to traverse the blockage at the merge is to merge right just before the merge:

So when we apply this rule to the overall situation at the interchange, this would be the optimal strategy:

This doesn't seem like much, but that's because this model is very compressed. If we zoom out, the traffic would instead look like this:

It’s not easy and it is a bit stressful. Most prefer to just stay in the lane and relax, but if you want to get ahead in life, you gotta put some work in. Merging is tight: commuters do not appreciate the “freeloaders”, but it’s easier than it seems, just like the prior research shows.

A common mistake is to merge too early. Yes, I know you want to be conservative; yes, I know it's hard to patiently watch the traffic slow down ahead of you; you want to react, you want to merge left, away from the impending congestion... This is a mistake. If you're in the right lane on 101, merge under the bridge, not before! Exercise patience. You're the traffic tiger next to a flock of antelopes. Wait.

Another trick, employed by some shuttle bus drivers, is to exit the freeway and immediately enter it again. However, this involves a traffic light, and in my experience it's not as fast as five merges (but if you're driving a bus, merging is indeed more complicated).

Appendix

If you’re curious, I’ve filmed two videos of this interchange.

Approach from Highway 101. Notice how three traffic lanes are merging into 101, and how motorists are merging leftwards. Before the overpass, the left lanes are more congested, and the traffic in the right lanes moves faster. However, as the camera approaches the overpass, the left lanes pick up speed while the right lanes come to a standstill.

Approach from Highway 92. This time, we are merging into the right lanes, which stall. Note how the relatively small amount of traffic coming off of Hwy 92 leads to a disproportionately slow merge onto Hwy 101. You can also notice, at the time of the merge, that the left lanes are moving way faster than the merging lanes.

(Also some dude confused the shoulder with an extra lane... Oh if only!)

***

This concludes the traffic series. I originally wanted to write some simulations, but I realized there's already a large body of work.

If you're reading from outside of the US, you might be puzzled by what happens. In California, unlike on the East Coast or in Europe, drivers do not have a culture of staying in the right lane and using left lanes to pass only. Everyone basically drives wherever they want.

I de-lid Core i7 9700k and I have no idea if it was worth it

My old CPU was due for an upgrade (I'd had it for 5 years), so I built a new machine. And since I've been studying Machine Learning, I needed way more computing power than I had.

When overclocking my system, I "de-lid" ("de-lidded"?) my CPU, replaced the thermal paste, and upgraded to liquid cooling... What did I gain? Well, I have no freaking clue, since I did not measure performance before starting. 🤦 Worst mistake ever. Don't repeat it.

What is my setup?

  • nVidia RTX 2080 Ti. nVidia GPUs are essential for machine learning these days, so I got an RTX 2080 Ti from EVGA (this one works great and will last me through the end of my studies).

  • Core i7-9700k, an 8-core 4.6 GHz "gaming" CPU. In my experience, a GPU-powered machine learning algorithm will likely not use many CPU cores; instead, one core will be used a lot. That informed the choice of CPU, and Intel's Core i7-9700k benchmarking results from userbenchmark.com were solid (I've never been an AMD guy).

For a short period of time, I had the most advanced consumer CPU setup... The RTX demos and the "Metro: Exodus" game look amazing... and ironically I don't have any time to play them. 🤓

De-lidding Core i7-9700k

When researching CPUs, I found out about "de-lidding": "opening up" the CPU itself to replace the internal thermal compound with something better. Some Intel CPUs, they say, shipped with shitty stock thermal paste that wasn't very efficient at heat transfer.

Core i7-9700k, however, ships with some sort of a metallic solder. How do I know that? Because I opened it up :-)

De-lidding was a well-established, well-documented, and well-researched procedure for older CPUs. There are even tools like "Delid die-mate" available to make it easier. My CPU was newer, so there was scant evidence, and most of it was recommending against de-lidding "for most users", e.g. this video.

But I still did it, and the worst part is, I don't even know if it made any difference because I didn't measure beforehand. I have excuses: I was busy with school and I didn't have time to measure anyway ("But you still had time to assemble a new PC, asshole!" says the voice in my head.)

Yes, my benchmark scores are through the roof, but they should've been through the roof to begin with.

Currently, I run my system at the following ("limiter" is why I didn't OC it further):

| Component | Stock | Overclocked | Δ | Temperature under load | Limiter |
|---|---|---|---|---|---|
| CPU core clock | 4600 MHz | 5000 MHz | +400 MHz (+8.7%) | 71 C | temperature: 71 C is a bit above the comfortable 60s range |
| CPU cache clock | 4300 MHz | 4700 MHz | +400 MHz (+9.3%) | 71 C | I have no idea how to test it, so I used the same speedup as above |
| GPU core clock | 1935 MHz* | 2065 MHz | +130 MHz (+6.7%) | 72 C | Atomic Heart demo crashed at +140 (Fire Strike crashed at +150) |
| GPU memory clock | 7000 MHz | 8200 MHz | +1200 MHz (+17.0%) | 72 C | it kept working but I got scared... |

* For some reason, the manufacturer website lists base core clock as 1635, but my benchmarking software as 1935. I'd trust the software. By the way, stock temp on my GPU was 66C under load.

I used Speccy to measure the temperature, Precision X1 to overclock the GPU, and Intel's shitty-looking tool for the CPU. I used Unigine and 3DMark benchmarks to load the GPU, and CPU-Z to load the CPU.

What did I learn?

CPU cooler matters

CPU cooler matters, and matters a lot. I initially thought that I had screwed the CPU up by de-lidding: the temperatures under load were up to 92 C (!!!). That's too much. Various overclocking forums recommend keeping an overclocked modern Intel CPU in the 60s.

Replacing the basic cooler with a 240mm liquid cooling solution from Corsair helped.

That being said, I ran that CPU in the 80-85C range for days at a time (while training neural networks), and everything turned out fine. I could've just been lucky though.

Liquid cooling takes up space

Turns out, there's not enough space in my computer case for the 240mm liquid cooling heatsink and the fans. My case is "only" 18 inches tall; I guess I'm up for an unexpected upgrade. My CPU looks like it "vomited" a radiator.

Building PCs is tedious

It was exciting the first time, when I was a 12-year-old boy. Today, turning a million screws seems no better than assembling an IKEA drawer set. Getting the best performance by tuning overclocking "hyperparameters" and then putting the results in a spreadsheet looks way too much like work or school.

De-lidding in more details

Tinkering with the device in a manner described in this post will definitely void your warranty. I'm just sharing my experience; I'm not describing some manufacturer-recommended procedure. Do not attempt unless you know what you're doing and unless you're prepared to buy another CPU.

Here's what de-lidding looks like. You'll need the tools:

  • der8auer Delid Die Mate 2 to remove the lid.
  • Cardboard knife / razor / small kitchen knife to clean out the solder.
  • Paper towels.
  • really good "liquid metal" thermal compound because if not this, then why bother? Buy at least 2-3, because 1 pack is literally for 1 application, so if you screw it up (and the first time you will) you'll need more.
  • Medical tape or some other non-sticky tape (like a painter's tape) to protect circuitry from liquid metal.
  • A set of very fine sandpaper, 2000-3000 grit. This is to remove the solder and excess liquid metal when you screw up. You can skip this if you're very patient and careful with sharp instruments.
  • Silicone glue that can withstand heat and can glue metal (the CPU heatsink) back to the fiberglass CPU base (some say it's optional tho).
  • It's best if your toenails are due for a trim, as toenails are the best tool for Step 4. (You can also use a non-sharp metal object or a credit card.)

Step 1: Plug the untampered CPU in, and measure

Measure your performance first. You will spend hours assembling (it took me 2 hours), delidding (4 hours), and then overclocking (5 hours) your PC; spend 20 minutes testing your assembled PC first. Use these:

  • CPU-Z has a cpu load-test.
  • Speccy will show you a temperature graph (click on those green squares).
  • Precision X1 will show GPU graphs (others use Afterburner).
  • userbenchmark.com offers some basic tests but I found them to have ~10% variance from test to test.
  • Even the free version of 3DMark will save your results online so you don't have to write them down.

Step 2: Remove the lid!

Take the CPU out, clean up the thermal paste from it (you have it on because you measured first, remember?). Now use the Die-Mate tool as instructed in the manual, or watch it on youtube.

You'll find that the lid covers literally nothing special. Just another black box.

Step 3: Clean out the thermal compound.

No, don't use your toenails yet. That's where you take out your exacto knife or a razor, and slowly and patiently scrub it out. That's where you also understand that this whole thing is a waste of time, and you wish you hadn't done it. Too late.

Once you've removed most of it, use fine (3000 grit or more) sandpaper to clean out the rest.

If you're re-doing your prior work and scrubbing out liquid metal, be very careful! Do step 5 first (add tape), and then wipe it out with a paper towel. Be generous and never swipe with the same side of the towel twice: the liquid metal will go right back.

Step 4: Clean out the silicon glue using your toenails or a non-sharp object

Toenails will just work, while a knife might damage the small metal components underneath. Credit cards, or other non-sharp metal or sturdy pieces (find some case component you aren't using), come in handy too.

Oh, and you can use sandpaper (this time a coarser one, like 300 grit) on the heat spreader.

Step 5: add tape around the die

Liquid metal is conductive, so it's extremely important that none of it gets into contact with the circuitry (actually, it'll likely be fine if you spill just a bit and wipe it off carefully, but the tape is essential).

Some people suggest covering the surface with nail polish instead (wat?). I have not tried it as I don't polish my nails typically. ;-P

Step 6: add a little bit of liquid metal

Liquid metal works in tiny amounts (see the instructions here). Very slowly and carefully push out a ball about a quarter-millimeter in size (use the needle-like tip). Practice on a paper towel first. Use the q-tip that comes in the box to spread it out.

In this picture, I added a bit too much on the chip (it worked though) and a bit too little on the heatsink:

Keep adding little bits and spreading. It will sometimes seem that you squeeze more out and nothing changes, but the added metal just makes the "pool" a tiny bit taller. You don't want it to look like a "pool" at all: it should be a bit messy, and it should not go "up" at the edges nor "reflect" like a mirror.

Step 7: Remove the tape

The tape must come off easily. If you added too much metal, siphon it back into the syringe using the other tip.

Step 8: Add a tiny bit of silicone glue to all edges of the heatsink

Put it on all edges the same way it was before. Use a small amount (1 mm tall at most), but it's OK if you overdo it a bit, since it's not conductive.

Tip: cut the gasket diagonally with scissors so it's easier to spread on a horizontal surface.

Step 9: glue it back using the delid tool

Glue it back. The instructions tell you to be careful not to damage the CPU. I found I was too careful the first time around: it ended up a bit squishy. So the next time, I screwed it in until it didn't go any further.

Wait for... idk, I left it overnight. Clean out the excess glue from the outside with a paperclip so it's nice and tidy.

Step 10, optional: photograph the top of the CPU (serial number and stuff)

If you have some liquid metal left and you want to use it between the cooler and the CPU, then you're seeing these numbers at the top of the CPU for the last time. Liquid metal is nasty; it binds to other metal and is impossible to clean. You can sand it off, but you'll be sanding off the top layer of the paint.

Step 11: put it back in, measure the results, and tell me what you got in the comments!

Because I have no clue if it all was worth it. Tom's Hardware says it would be, but others don't seem to think so.

Spring Break!

One might notice that the posts in my blog kind of follow the school schedule. Last time I posted around Christmas, and this post was written at the end of March. Well, that’s because I am on a spring break.

I’ve enrolled in a Stanford graduate education program on Artificial Intelligence through SCPD, the Stanford Center for Professional Development. As part of the program, you take the same courses as Stanford’s CS students do and--if you get good grades--you earn a certificate, which is not quite a degree but still something you can frame. You might also keep it rolling until you get a Master’s; many find it hard to resist.

The coolest part of this is that you get to learn from the actual pros who are advancing the field. I took classes taught by Percy Liang and Christopher Manning. Yesterday we were just reading their papers, and now look, I can see them teach a class! Make sure to check the course catalog if the big names are what you’re after here. Also, some report that in some courses the head instructor doesn’t really appear much anymore, and only gives one or two presentations over video.

It takes a lot of hard work. The assignments take a surprisingly long time to complete, 10-20 hours a week; either they are really hard, or I’m a bit slow (or both). But gosh, is it worth it. I’m impressed by the quality of American education (this is my first experience at an American university). The Teaching Assistants challenge and push you a lot, and thus help you learn.

Add course projects and an occasional midterm here and there, and so you have a second full-time job.

So far I’ve completed two courses:

  • CS224n: Natural Language Understanding (NLP) with Deep learning (RNNs / CNNs, word vectors, sentence parsing);
  • CS221: a course on a wide range of AI topics featuring search, reinforcement learning, Markov decision processes, and other basic tools of AI. The “overview” nature of the course is a trap: I still had to pass a midterm that required deep understanding of all of these.

I now have a divided impression of AI and Machine Learning in general. On one hand, Machine Learning has a long history and deep roots in logic and statistics. That’s something you can’t get from the hyped-up articles in the media, only through studies. These neural networks and their architectures are actually closely tied to “non-neural” function optimization. On the other hand, there is a lot of stuff I can't quite grasp yet: why do these networks work so well? What are their limits?

I'll get a better feel for it as I go: it’s still not over. I’ve just completed my 2nd course out of 4, and I’ll soon embark on learning about Computer Vision. However, the education and inspiration I've received, even without the paperwork to boast about, is priceless.

If you do choose to enroll, sign up early. The courses fill up the week enrollment opens; see the "open for course enrollment" dates in the Academic Calendar, and the quarters in which the courses are "offered" in the course catalog.

But if getting degrees is not your thing, many courses are available online; e.g. here are the cs224n lectures.

Meanwhile, stay tuned for a couple of posts I've written over the spring break: the completion of the Highway series, and some experiments with hardware.

Traffic III: Why doesn't Changing Lanes Always Work?

Recall from the previous post that the lanes with more traffic flow (with more cars per minute passing a certain point) back up more when there's congestion. This can be summarized by the following picture:

The pressing question from that post was, why not just change lanes to the faster one, as you're approaching the congestion?

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

Modeling timing in lane changes.

The short answer is: if you already got stuck in a slow lane, and the next lane over is faster, then it's pretty hard to merge into it due to the speed difference.

Many of the traffic models I read about do not consider the distribution of cars across traffic lanes, so I couldn't find readily available research on multi-lane traffic. Let's speculate about it, just for the fun of it. This is not based on any of the existing models, although I'm sure some exist.

A car can merge if the total time it takes to (a) physically drive to the other lane and (b) accelerate to the speed of the traffic in that lane, is smaller than (c) the time that passes between two cars in the other lane passing the same point.

$T_a$ is actually the time it takes to physically travel the width of the lane w, which is the distance over the horizontal component of the velocity vector (assuming $\alpha$ is the entry angle). Alternatively, if the initial velocity is low, the acceleration component takes over. Don't discount that: traffic lanes are unexpectedly wide (just like the white stripes are unexpectedly long).

$$\begin{align*} T_a =& \frac{w}{v_3\cos\alpha}, T_a' = \sqrt{\frac{2w\cos\alpha'}{a}} \\ T_b =& \frac{v_2 - v_3}{a} \\ T_c =& \frac{1}{q_2} \end{align*}$$

Actually, we should scale $T_b$ down to somewhere between 0.5x and 1x of the above, because in a forced merge the other driver will be decelerating to prevent a collision, but we also need to factor in their reaction time.

We could just leave it here, and say that merging requires the following condition:

$$T_c > T_a + T_b$$

But note that as the lane you're in (lane 3) slows down, $T_b$ becomes larger ($v_3$ decreases), while the other parameters stay at the same level. So the lesson from this is: change lanes before you're stuck; otherwise it might be too late.
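To put rough numbers on it (illustrative values, not measurements):

```python
# How T_b grows as your own lane slows down, for a fixed target-lane speed.
a = 2.0                       # m/s^2, a comfortable acceleration
v2 = 20.0                     # m/s, speed of the target lane (~45 mph)
for v3 in (18.0, 10.0, 3.0):  # your lane: barely slower -> nearly stopped
    print(f"v3 = {v3:4.1f} m/s  ->  T_b = {(v2 - v3) / a:.1f} s")
```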

But merges like that still occur; why?

On a real highway, while $T_a$ and, to a large extent, $T_b$ (except for the trick described in the side note) are not under the control of the driver, $T_c$ is not actually constant. Short-term fluctuations are sometimes sufficient to merge out of the slow lane, and their variance is larger in slower traffic.

How can a merging driver control $T_b$ in bumper-to-bumper traffic when the lane moves with speed $v_{max}$ which seemingly must equal $v_2$? So it might seem that $T_b > \frac{v_{max}}{a}$. There's a trick. In order to cheat this equation, when the driver in front of me accelerates, I might just wait a bit, leave myself a larger gap, and achieve speed larger than the average $v_{max}$ before starting to merge.

In other words, merging out of a slow lane requires waiting a bit to decrease $T_b$. Note that the gap doesn't have to be large (it's a small multiple of the lane width w), so the cars merging into this gap from an even slower lane shouldn't be a problem.

A sensible assumption would be that the number of cars passing in the other lane (into which we want to merge) within a given unit of time follows a Poisson distribution with $\lambda = q_2$ (where $q_2$ is the flow of the other lane). Therefore, the probability that there will be 0 cars passing within the time it takes to merge, $T = T_a + T_b$, is $p = e^{-q_2T}$. If $q_2 = 40\ \text{cars/min}$ and $T = 2\ \text{sec}$, then $p \approx 0.26$.
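The back-of-the-envelope computation, with the numbers from above:

```python
import math

q2 = 40 / 60                 # flow of the target lane, cars per second
T = 2.0                      # seconds needed to complete the merge (T_a + T_b)
p_gap = math.exp(-q2 * T)    # Poisson probability of 0 cars during T
print(round(p_gap, 2))       # ~0.26
```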

It gets more complicated when we consider repeated observations though. The main point is, merging in heavy traffic can be modeled using simple mathematical equations.

"Friction", or modeling the impact of slow lanes onto the fast ones.

The result from the previous section is that a merge from the slow lane into the fast lane will happen with high probability. What effect does it have on the speed of the "fast" lane?

The effect I observed is that drivers either slow down or merge out of the lane adjacent to a stopped or slow lane. I think it stems from a subconscious expectation that some driver will take advantage of the algorithm described above. Remember, the merging driver strives to decrease $T_b$:

$$T_b = \frac{v_2-v_3}{a + b}$$

Indeed, if the merging drivers expect that $b \ne 0$, then the drivers in the target lane would naturally try to avoid decelerating, and avoid being in that lane altogether. I first noticed this effect when driving next to interchanges with congested exits, such as 580 West-80 East or 101 North-SR 109 North.

This naturally maps to a concept of "viscosity", and is thus similar to the effects found in laminar flow.

So in some regards, the traffic flow does look like laminar flow after all.

***

In the next post, I'll share videos of a traffic situation I repeatedly observe on the road and I'll maybe attach a short commute guide for the Silicon Valley commuter.

Traffic II: Why does the Fast Lane become Slowest in Traffic?

As I drive up to San Francisco on one of the freeways (either 101 or 280), I'm approaching the inevitable traffic stoppage. The freeway is clogged, and it seems I'm up for 15 minutes in bumper-to-bumper traffic, wishing I had a self-driving car.

But wait, what is it? Why am I stopped and everyone else keeps going? Could it be just another example of the Why is the Other Lane Always Faster? illusion?

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

Casual, repeated observations confidently refuted this. The traffic always gets stuck in exactly the same manner. The leftmost lane's blockage starts farthest back; then the second-to-the-left lane's; and the rightmost lanes are always fastest. How come?

The conclusion, it seems, does not require a deep understanding of traffic theory: more cars arrive through the fast lanes than through the slow lanes, and more cars means more congestion. But traffic theory can help put mathematical notation around it, so read on.

Fundamental properties of traffic: Flow, Density, Speed

My observations suggested that a model of traffic that can explain this phenomenon requires detaching the speed of the traffic from its other properties, such as the number of cars per unit of highway. I later learned that traffic theory has been exploring these questions since the 1930s (here's an overview of classical traffic flow models on Wikipedia), so I'll put my observations in these accepted terms.

If you observe a section of a lane of the freeway for some time, as enough cars pass, you'll notice that traffic has some numerical properties.

  1. The amount of cars passing through a certain point per unit of time, what traffic engineers call flow. Let's measure it in cars per minute, and name it q.

  2. The average speed with which cars move, or v.

  3. The average amount of cars that are simultaneously within the segment boundaries at any given point of time, referred to as density, or k.

We'll talk more about these in the section on traffic models, but for now we'll just use them to discuss the question of the day.

Why does the fastest lane have the longest congestion?

Let's assume a typical highway in the San Francisco Bay Area that's moving cars without congestion:

The speed limit on Californian freeways is 65 mph (~105 km/h). So when slow trucks drive in the slow right lane, few other cars want to share the lane with them. That means that the slow lane has disproportionately small flow.

So while I didn't conduct scientific research on these values of q, they seem entirely explainable within the traffic model and agree with casual observation.

We know that the Fast Lane (the left lane)'s speed will be higher. However, it is also true [in California] that the Fast Lane will have higher flow.

It also makes sense that a slightly faster lane will move slightly more traffic (= will have larger flow). The disproportionally smaller flow in the slow lane is a result of a speed limit effect (see sidenote).

Now let's assume congestion develops at a certain point in the road (because the road narrows, or even spontaneously).

How many cars will get stuck in each lane (assuming no lane changes occur)? If a lane's flow is $q$, then the number of cars passing a given point during time $T$ is the product $q\cdot T$. So if the incoming flow is $q_n$ and the outgoing flow through the blockage is $q_0$, the number of cars that enter the zone but do not get through the blockage during time $T$ equals:

$$(q_n - q_0)\cdot T$$

It seems reasonable that the lanes with the highest traffic flow will accumulate the most cars:

Indeed, per unit of time, more cars enter the road before the congestion in the high-flow lane than in the low-flow lanes. Therefore, the higher-flow lane will accumulate more cars.
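A toy computation makes the point (the flows here are made-up numbers, not measurements):

```python
# Queue growth per lane: accumulation = (incoming flow - outgoing flow) * time.
incoming = {"left": 30, "middle": 25, "right": 15}   # cars/min entering each lane
q_out = 12                                           # cars/min clearing the blockage
T = 10                                               # minutes of congestion

queues = {lane: max(q_in - q_out, 0) * T for lane, q_in in incoming.items()}
print(queues)   # the highest-flow (left) lane accumulates the longest queue
```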

Note that "high-flow" does not necessarily mean "faster". Traffic flow theory and practice establish that the highest flow is attained at a certain speed, and higher speeds as well as lower speeds lead to the decrease in flow. That makes sense: drivers would increase the distance between cars as they drive faster. Link to the theory below.

Admittedly, this simple model ignores most of the long-term effects of the traffic. However, it does illustrate what I observe pretty much daily: lanes with the highest flow tend to develop longer congested segments than the lanes with the lowest flow.

So a lesson from this could be: when you see congestion ahead, merge into the slow lane.

Another lesson from this: it's sometimes beneficial to drive in a lane that has a high-flow exit. Consider the following situation:

Here, the congestion in the first lane will be about half of that in the other lanes, because the exit "unloaded" it.

This model stops scaling though

The effects described above will likely disintegrate after several minutes, thanks to lane changes. The distribution of the speeds, flow, and concentration will "diffuse" from the slowest lane into other lanes. I'll devote a separate post to lane changes.

Other results of Traffic Modeling

Researchers have been studying traffic and the properties of traffic for as long as there were cars on the city roads. Basically, the models I read about focused on two areas:

  1. "Car follower models" that infer macroscopic traffic flow properties from the behavior of individual driver decision-making.

  2. "Traffic flow models" that study macroscopic traffic flow properties directly, and infer relationships between them.

Note that classical fluid dynamics models (of the kind that study the flow of water in pipes) are not applicable to traffic flow. Although fluid dynamics studies similar properties "such as flow velocity, pressure, density, and temperature, as functions of space and time", cars and molecules of a fluid behave differently. Most notably, cars don't normally push one another as they collide, so things like the Bernoulli principle do not apply; and while liquid in a pipe under pressure accelerates at a bottleneck, car traffic decelerates.

Car follower models

Car follower models basically model the behaviors of individual cars (how drivers accelerate, brake, change lanes, and generally "follow" one another). For example, there's the Gipps model and Newell's model. Diagrams like this one show individual car tracks:

(Image from the "Traffic flow theory and modelling" chapter by Serge Hoogendoorn and Victor Knoop.)

To illustrate the point I made above, notice how this model simply has "overtaking", as if it had no other effects than one car passing another. However, on a congested freeway, there also needs to be space in the other lane for the car to move into, and subsequently and optionally back into the original lane. So this particular model does not intend to describe lane changes (which is OK; it can have other uses).

However, some other models do. In fact, a system called TRANSIMS mentions that it models behavior on a freeway as a network of agents trying to maximize their utility, and finds a Nash equilibrium, which becomes the solution for the steady flow of traffic.

Traffic flow models

Early traffic flow models mostly focused on establishing the relationship between speed, density, and flow. For example, the following diagram could be used to predict, at what speed will the highway reach maximum capacity:

It's reported that early traffic models did not explain spontaneous congestion on freeways (when there's traffic without any apparent reason). I guess all it took was for freeways to become spontaneously congested in the areas where the researchers worked. :-)

The discovery of spontaneous traffic breakdown by various people followed (in late 80s-late 90s), and the following law called "Fundamental diagram of traffic flow" was established: as the upstream traffic flow increases, speed downstream increases until it reaches the breaking point, at which both speed and the downstream flow start decreasing with continued increase of the upstream flow. It can be depicted on a neat three-dimensional diagram:

The diagram is borrowed from the "Traffic Stream Characteristics" by Fred L. Hall (pdf here).

My personal takeaway from this

While I'm not a traffic engineer, I set out to play with traffic simulations and see how I could model my daily commute. While reviewing the literature, it turned out that the field has amassed ample knowledge about highway traffic already, and there are existing open-source simulators (like TRANSIMS) that probably already do it better.

For example, the model I wanted to develop would be very similar to the "Cellular automata model" described on slide 24 of the slide deck by Benjamin Seibold (Temple University), known as the Nagel-Schreckenberg model.
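The model is simple enough to sketch in a few lines of Python (this follows the standard textbook description of Nagel-Schreckenberg, not the linked slides):

```python
import random

def nasch_step(road, v_max=5, p_slow=0.3):
    """One parallel update of a ring road; road[i] is a car's speed or None."""
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        gap = 1
        while road[(i + gap) % n] is None:   # cells to the next car ahead
            gap += 1
        v = min(v + 1, v_max)                # 1. accelerate
        v = min(v, gap - 1)                  # 2. don't hit the car ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                           # 3. random slowdown
        new_road[(i + v) % n] = v            # 4. move
    return new_road
```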

However, I still wasn't able to find any mention of traffic lane change models. They probably exist; please let me know if you find them!

So I think I'll pull the plug on the simulation and move on to other things. However, I will muse about lane change modeling and dynamics a bit in the next post, and also tell the story of the 101-92 interchange.

Traffic I: Why is the Other Lane Always Faster?

"This highway has four lanes. One of them is the fastest, and there's a 75% chance we're not in it."

Last year, I switched from the Google office in San Francisco, next to where I live, to the Mountain View office. In search of more interesting projects and better career prospects, I repeated the move towards the company headquarters many engineers make. But it comes with a price. The 101.

The commute takes anywhere between 45 minutes and 2 hours one way. The spread is not a force of nature; it is completely explained by the variations in traffic.

Series on Car Traffic modeling

Inspired by my almost daily commute on California Highway 101, I explored the achievements of traffic theory and found answers to the most pressing mysteries of my commute.

The more time I spent on the road, the more I noticed that the traffic follows some predictable patterns. I noticed the patterns, but they seemed counterintuitive. For example:

  1. Why does the left lane travel faster when the road is clear but seem to get stuck more when there's traffic? (here's why)
  2. Why does the 92-101 southbound interchange always get stuck while the road is always free after it? (here's why)
  3. And finally, why are the other lanes always faster?

Over the next several posts, I plan to explore these questions and maybe build some sort of a traffic simulator. But let's begin.

"Why are other lanes always faster?"

Once I was late for a flight, and a colleague offered to drive me so I wouldn't have to spend time on parking. It was a Thursday afternoon so the Bay Area traffic was in a predictable standstill. Luckily, my colleague was an amateur race car driver, so we gave it a shot.

Racing expertise didn't seem to help. Some skills were helpful, like merging at will into a lane that has "no space" between the cars. However, we got predictably stuck with the rest of the drivers. We started talking about traffic, and why we got stuck in the slowest lane.

I brought up a book on traffic that I had listened to before. "Traffic: Why We Drive the Way We Do (and What It Says About Us)" by Tom Vanderbilt is very fitting entertainment for someone stuck in their car (I listened to it on Amazon's Audible). Among other mysteries of traffic, the book explores the paradox of the slowest lane in its very first chapter.

So why do other lanes seem faster? The book posits that it's an illusion (the other lanes are just as slow), and offers the following explanations:

  1. "Unoccupied waiting" seems longer than it actually is.
  2. Humans hate when others "get ahead of them".
  3. We're naturally more aware of things that move than of the things that don't, so we don't notice the other lane when it's slow.

Having heard all that, my colleague offered a simpler explanation.

"This highway has four lanes. One of them is the fastest, and there's a 75% chance we're not in it."

Let's make a model!

And I think my friend was more accurate here. Traffic lanes do differ in the time it takes to travel them. I kept noticing, when driving down the highway in the left lane, that its traffic "bunched up" way further back than in the "slower" right lanes!

That's why I want to come up with a mathematical model that explains my own commute experience. Here, I will not take on modeling traffic in a big, densely interconnected city, but focus on something simpler: I'll try to model the traffic on one single long highway (just like the 101), and see where it takes me.

Of course there will be lanes, because explaining the dissimilarity between the flow of traffic in different lanes is the whole goal of this.

Stay tuned.

I tricked the Amazon Go store

I tricked the Amazon Go store.

It's not hard. You need an accomplice. The accomplice swaps two items on the shelf. You take one of the misplaced items. You get charged for the other. But that's not the point.

Mechanical Turk?

Two days ago, I was on a trip to Seattle, and of course I visited the Amazon Go Store. If you are out of the loop, it's a store without checkout. You come into a store, you grab items from the shelves, you walk out. That's it.

Amazon doesn't explain how it works, but we can infer some from observations.

  1. When you walk out, you don't get a receipt instantly;
  2. The app sends you a receipt later;
  3. The time it takes their servers to present you a receipt varies. We had three people enter the store; the person who didn't spend much time got his receipt in 2-3 minutes, the accomplice in ~5 minutes, and for me, it took Amazon a whopping 15-20 minutes to serve my receipt.

We can conclude that tricky interactions get sent for a human review, e.g. to Mechanical Turk, which Amazon conveniently owns. It seems that a bunch of object recognition coupled with a bit of mechanical-turking does the trick.

But it is the future

Once I had satisfied my curiosity and managed to trick the store, I returned to use it for real.

I walked in, grabbed a bottle of water, and walked out. It took 22 seconds. I got a receipt for a bottle of water later, but I didn't even check.

Folks, this is the future.

In his article "Invisible Asymptotes", Eugene Wei attributes a lot of Amazon's achievement in winning retail consumers hearts to eliminating friction. He writes,

People hate paying for shipping. They despise it. It may sound banal, even self-evident, but understanding that was, I'm convinced, so critical to much of how we unlocked growth at Amazon over the years.

Interestingly, Eugene doesn't apply this to Amazon Go, but that's probably one visit to Seattle away. ;-) Waiting in checkout lines is the worst part of brick-and-mortar shopping experience; it's obvious to everyone who shopped at least once.

Therefore, Amazon Go is the future.

By the way, does anyone need a bottle of salad dressing?