Plato asked Socrates, “Teacher, can you tell me what love is?”
The teacher said, “Young man, I certainly can. But first, go to the wheat field, pick the most golden grain of wheat, and come back. You may pass through the field only once, and you cannot turn back to pick.”
Plato went into the field and found a fine grain of wheat. But he wondered: maybe there was a better one further on. Then he saw another… but maybe an even more perfect one was waiting for him.
When he reached the end of the field, he realized that he had missed the most golden grain of wheat. Full of regret, he returned to Socrates empty-handed.
“My son, this is love… you keep looking for a perfect match, only to realize later that you have already missed the person.”
Plato asked again, “Teacher, can you tell me what marriage is?”
And Socrates once again said, “Young man, I certainly can. But first, go to the wheat field, pick the most golden grain of wheat, and come back. You may pass through the field only once, and you cannot turn back to pick.”
Plato went back to the wheat field. Careful not to repeat his previous mistake, he picked an acceptable grain and returned to his teacher.
“This time you brought back a grain of wheat. You looked for one that was good enough, and you believed it was the best you would get… this is marriage.”
Thus spoke the teacher of yore. Walking to the middle of the field and picking a grain at random, however, is not the optimal strategy: with 100 grains, the chance of landing on the best one is only 1%. I decided to explore the possible strategies in a Monte Carlo fashion, and the following diagram** will help explain the approach:
In a Monte Carlo method, we create a hypothetical “date pool”. The circles above each represent a date, and their “goodness” is represented by both their size and color. (Bigger, brighter = better.) We then pick our date using some strategy and evaluate the outcome. A single outcome in isolation is meaningless – Ms. Q, our princess-in-question, may have just been unlucky – so we repeat the experiment several million times to aggregate sufficient statistics. For a simple problem like this, the success rates converge to stable probabilities.
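The setup above can be sketched in a few lines of Python. This is a minimal illustration, not the actual simulation behind Part 2: the pool size (100), the goodness distribution (the same gaussian used for the diagram below), and the names `simulate` and `pick_random` are all my own assumptions for the sketch. The random baseline recovers the 1% figure mentioned earlier.

```python
import random

def simulate(strategy, n_candidates=100, n_trials=100_000):
    """Run many hypothetical dating seasons; return the fraction of
    trials in which the strategy picked the single best candidate."""
    successes = 0
    for _ in range(n_trials):
        # Each candidate's "goodness", drawn like the circles in the diagram.
        pool = [random.gauss(30, 7) for _ in range(n_candidates)]
        choice = strategy(pool)
        if pool[choice] == max(pool):
            successes += 1
    return successes / n_trials

def pick_random(pool):
    # Baseline: walk to an arbitrary spot in the field and grab a grain.
    return random.randrange(len(pool))

print(simulate(pick_random))  # hovers around 0.01, the 1% baseline
```

Any other strategy – for instance, one that observes the first k candidates and then takes the next one better than all of them – can be dropped in as another `strategy` function and scored the same way.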
What are the findings? Well, for that, you’ll have to wait for Part 2! In the meantime, if you want to voice your objections and opinions, I am all ears.
** Created with Nodebox, which generates images computationally with python syntax. It’s super fun to play with. The code follows:
import random
colors = ximport("colors")

def rndcolor():
    return color(random.random(), random.random(), 0.5, 0.9)

# Draw 100 "goodness" values from a normal distribution.
size = []
for i in range(100):
    current = random.gauss(30, 7)
    size.append(current)

# Lay the dates out in a 10x10 grid; size also drives the color.
for i in range(10):
    for j in range(10):
        padding = 50
        scaling = 30
        index = (i * 10) + j
        clrl = (size[index] / 110, size[index] / 70, size[index] / 50, 0.9)
        colors.shadow(alpha=0.08)
        fill(clrl)
        oval((j * padding) + scaling, (i * padding) + scaling, size[index], size[index])
        fill(0.3)
        font("Helvetica", 11)
        text(str(index + 1), (j * padding) + scaling - 5, (i * padding) + scaling - 5)