kosseferal has a wonderful entry today about hurt heart. It reminded me of when I was very little and my sister (18 months younger) was crying and crying and crying and it didn't look to me like anything was hurt. I asked my mother why she was crying and she said "she got her feelings hurt". I spent days looking for my feelings to see if they were ok.

My friends Kieran and Alan are back from Paris. They have promised me all of the details soon, but this morning Kieran's brother - who is also my boss - waltzes in with a beautiful new sweater. It's really lovely, and Stephen, who is wonderful but tight as a tick and would never actually buy such an item himself, is proud as a peacock.

Here are some interesting things about Google, from a talk given by a project manager. They're on the human factors end of Googling.

And this reminds me of the time I worked for IBM in Charlotte, where we made automated teller machines. The engineers were working on one that dispensed coins. They needed people to test the coin release cup, and they recruited me because I had the longest fingernails. At the time I was proud. Thinking back on it now, it kind of reminds me of trailer trash.

Enough of this; my publish operation is nearing the end and I need to get back to it.
(no subject)
Date: 2005-01-18 05:57 am (UTC)

Typically what happens is that when a feature's ready for launch, we'll run a test on a tiny percentage of the online population (how tiny depends on the riskiness of the feature). Initially, tests were totally random, but we also added the ability to separate out a set of randomly selected users and follow them over long periods of time. (In aggregate form, of course. Contrary to what the conspiracy theorists suggest, we were interested in big blobs of similar users.)
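For the curious, here is a rough sketch of how that kind of carve-out could work. It's purely illustrative Python, not Google's actual system; the hashing scheme, bucket count, and cohort name are all assumptions.

    # Illustrative only: deterministic bucketing so a fixed slice of users sees
    # the feature, plus a separate stable cohort that can be followed over time
    # in aggregate. Names, bucket count, and hashing choice are assumptions.
    import hashlib

    NUM_BUCKETS = 1000  # 1000 buckets -> 0.1% granularity

    def bucket(user_id: str, salt: str) -> int:
        # Hash the user id with a per-experiment salt so different experiments
        # slice the population independently.
        digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % NUM_BUCKETS

    def in_test(user_id: str, experiment: str, percent: float) -> bool:
        # percent=5.0 puts roughly 5% of users into the test group.
        return bucket(user_id, experiment) < percent * (NUM_BUCKETS / 100)

    def in_tracked_cohort(user_id: str, percent: float = 1.0) -> bool:
        # A fixed salt keyed only to the user id gives a cohort that stays the
        # same across experiments, so it can be followed over long periods.
        return bucket(user_id, "longitudinal-cohort") < percent * (NUM_BUCKETS / 100)

Hashing instead of sampling per request keeps each user consistently in or out of a test, which is what makes following a cohort over time possible at all.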
For example, when we launched my product, we did a 5% test. The two things we were looking for were (a) does it crash the site and (b) does it obviously and adversely affect revenue. Either one was grounds for shutting it down immediately. After my feature had run for several days, we'd amassed enough data to determine its overall behavior and which areas of the site it affected. I'd then have the option of retuning the feature or ramping it up to a larger population.
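And a similarly hand-wavy sketch of those two launch guardrails; the thresholds and metric names below are invented for illustration, not anything Google published.

    # Assumed guardrail check over the test slice: (a) stability, (b) revenue.
    from dataclasses import dataclass

    @dataclass
    class TestMetrics:
        error_rate: float                 # failure rate in the test group
        baseline_error_rate: float        # failure rate in the control group
        revenue_per_user: float           # test group
        baseline_revenue_per_user: float  # control group

    def launch_decision(m: TestMetrics) -> str:
        # (a) does it crash the site?
        if m.error_rate > 2 * m.baseline_error_rate:
            return "shut down immediately: stability regression"
        # (b) does it obviously and adversely affect revenue?
        if m.revenue_per_user < 0.98 * m.baseline_revenue_per_user:
            return "shut down immediately: revenue regression"
        # Otherwise keep collecting data, then retune or ramp up the percentage.
        return "keep running: retune or ramp to a larger population"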
*Everything* is tested, including seemingly stupid shit like bolding the punctuation that follows clauses which are themselves in bold. (I forget the results of that one, but it did make a measurable difference.)
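If you're wondering how a change that small can show a "measurable difference": one common way to check (again a sketch with made-up numbers, not Google's tooling) is a two-proportion z-test on click-throughs from the control and test groups.

    # A rough significance check for a tiny UI tweak: compare click-through
    # rates between control (A) and test (B) with a two-proportion z-test.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(clicks_a: int, views_a: int, clicks_b: int, views_b: int):
        p_a, p_b = clicks_a / views_a, clicks_b / views_b
        p_pool = (clicks_a + clicks_b) / (views_a + views_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return z, p_value

    # e.g. two_proportion_z(10_200, 1_000_000, 10_650, 1_000_000) gives a small
    # p-value: even a ~4% relative lift is detectable at that kind of traffic.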
[We were never able to determine if eating generic cereal affected specimens mailed into difficult to reach customer support. (-:]