As unlikely as it seems, the "match" the heading refers to is not a regular expression match. While I am in a long-term but open relationship with computers, I also pay a certain bit of attention to opposite-sex subjects of my species. This leads to two immediate conclusions:

  • I have a shortage of such subjects available for dating in my nearest social circle;
  • The more information technology is involved in the mating process, the more likely I am to succeed.

Which, in turn, makes it obvious that I like to visit dating sites. If your first guess was that I use a geek-oriented, fully automatic, neural-Bayesian-expert-system-network-powered social site, you would be wrong. Automated matching engines turned out to be a bad solution: the criteria they use are too generic, and besides, they're not as popular as sites that target a general audience. Therefore, the only option I have is to look for the one on general-purpose dating sites.

The problem with general-purpose dating sites

These sites (I tried several) share a common issue: they are annoyingly inefficient.

What's the source of the inefficiency? It's the way search results are presented and managed. Or, to be more specific, the way they're unmanageable by design. When you look for a woman to date, you're presented with a search results page sorted by a complex weighted formula. A large weight is assigned to the date of the last visit, but an even larger weight goes to a paid (or, sometimes, free) service that moves you up the search results page.

No wonder that a lot of women you have already considered as potential mates and rejected keep showing up in the search results, distracting your attention from the potentially dateable ones. This decreases efficiency dramatically.

I had already tried to apply a "geek-like" approach to another "real-life" problem I had: being overweight. The solution I found turned out to be quite efficient: the graph of my weight is still the biggest motivator for making myself less fat.

A solution

A solution to this would be to filter search results on the server side, discarding girls a user has already seen. However, aside from the apparent marketing problem (how would you sell good places in search results if they were so easy to hide permanently?), user-specific search results would suffer from performance impediments, especially on popular sites.

Therefore, a filter on the user side could solve the problem. And web browsers (at least, some of them) already have such a mechanism: user scripts.

I used Firefox and GreaseMonkey to make a simple script that improves the girl-finding experience. Here it is:

This script sets up an event listener for the ENTER key. When you press it, the script locates the parts of the web page that look like a girl's user info or like search results. If it finds a search result that has already been seen, it shades it, making the other girls more visible. If the script finds that I'm currently on a girl-specific user info page with a lot of details, it will strike that user out next time, because a girl I have looked at closely and rejected should be even less visible.
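For the curious, the marking pass boils down to something like the sketch below. This is not the script itself: the search-result selector and the getGirlId helper are hypothetical stand-ins for the site-specific parsing (the real code digs through the page with XPath, as the rant below explains).

```javascript
// A minimal sketch of the marking pass (selector and getGirlId are made up).
// Girls seen in earlier searches get shaded; girls whose detailed profile
// I have already visited get struck out entirely.
function markSearchResults(seen, visited) {
    var results = document.querySelectorAll('.search-result'); // hypothetical selector
    for (var i = 0; i < results.length; i++) {
        var item = results[i];
        var id = getGirlId(item); // extract the profile id from the markup
        if (visited[id]) {
            item.style.textDecoration = 'line-through'; // looked at closely and rejected
        } else if (seen[id]) {
            item.style.opacity = '0.3'; // merely shown in a previous search
        }
        seen[id] = true; // remember her for the next search page
    }
}

// Hypothetical helper: the real script parses the site's markup; here we
// just read a data attribute for illustration.
function getGirlId(item) {
    return item.getAttribute('data-profile-id');
}
```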

The script uses the special GM_setValue and GM_getValue functions. The Greasemonkey Firefox add-on provides them to access permanent storage that persists across browser invocations.
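If you haven't seen them before, the typical usage pattern is roughly this; the "seenGirls" key name is my own invention, and since Greasemonkey storage only takes primitive values, the set is serialized to a JSON string:

```javascript
// Load and save the set of already-seen profile ids across browser sessions.
// GM_setValue stores only primitives, so the object is kept as a JSON string.
function loadSeen() {
    return JSON.parse(GM_getValue('seenGirls', '{}')); // '{}' is the default value
}

function saveSeen(seen) {
    GM_setValue('seenGirls', JSON.stringify(seen));
}
```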

Here goes the rant

The script, however, looks a bit more monstrous than it should, and there are several reasons for that. I found a nice JavaScript reference; it was not as unhelpful as many others, but some issues were left unresolved anyway. Let's enumerate them.

Ajax loading

The simplest architecture for the script, of course, would be to just fix up the page after it's loaded. Why do we need all those creepy event listeners?

The key phrase here is after it's loaded. From within a userscript, we can't tell when the page, or a piece of its content, has finished loading. The dating site I targeted was a complex piece of software with a lot of frames, ajax-powered search result loading, and so on. It heavily suffered from the issue described in this StackOverflow question, and the solution presented there, the onhashchange event, didn't help.

Therefore, I had to retreat to User Brain Slowness: the user should hit ENTER after the page is loaded, which lets the script parse everything correctly. However, even this worked badly, hence all the event handlers that seem excessive at first sight.
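What that retreat looks like in code is roughly the following. The frame walking and the rescanPage stub are my reconstruction of the structure, not the actual script:

```javascript
// The "User Brain Slowness" workaround: re-scan the page whenever the user
// hits ENTER, assuming that by then the ajax content has already arrived.
function installEnterHandler(doc) {
    doc.addEventListener('keydown', function (e) {
        if (e.keyCode === 13) {   // ENTER
            rescanPage(doc);
        }
    }, true);
}

// Hypothetical re-scan entry point: in the real script this is where the
// marking pass and the profile-page detection live.
function rescanPage(doc) {
    console.log('re-scanning', doc.location && doc.location.href);
}

// The site splits its content across several frames, so the handler has to
// be attached to each of them, not just to the top document.
installEnterHandler(document);
for (var i = 0; i < window.frames.length; i++) {
    try {
        installEnterHandler(window.frames[i].document);
    } catch (err) {
        // a cross-origin frame throws here; nothing we could mark there anyway
    }
}
```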

XML traversal

Okay, so assume we have successfully loaded the document HTML tree. How can I now traverse it, spending as few keystrokes as possible?

As far as I know, there's no nice way to do this. The available methods are browser-specific and do not look even remotely like a foreach loop. You can see these ugly constructs, straight from the '90s, near the FIRST_ORDERED_NODE_TYPE constants in the code above.
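To give a taste of it: the XPath expressions below are made up, but the shape of the calls is exactly what the code above has to go through.

```javascript
// Fetching a single node by XPath takes the full document.evaluate() incantation.
var userInfo = document.evaluate(
    "//div[@class='user-info']",           // hypothetical XPath for a profile block
    document,                              // context node
    null,                                  // namespace resolver (unused for HTML)
    XPathResult.FIRST_ORDERED_NODE_TYPE,   // "just give me the first matching node"
    null
).singleNodeValue;

// Iterating over several nodes is no prettier: there is no foreach, only a
// snapshot that you walk by index.
var rows = document.evaluate("//tr[contains(@class, 'search-row')]", document,
    null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < rows.snapshotLength; i++) {
    var row = rows.snapshotItem(i);
    // ... inspect the row ...
}
```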

***

Having tried this, I really do not understand why JavaScript-based interfaces are becoming the industry standard. Perhaps I know JavaScript too poorly to see its full power (this is my first JS program). Or perhaps the programmers who prefer, and are capable of making, convenient languages simply steered clear of web programming and preferred to lock themselves up in ivory towers...

The only conclusion I'm sure about is that, the night I wrote the script, programming successfully distracted me from women for several hours again. What a jealous activity it is!