WORDSEARCH Entry Summaries

The attached mail is the collection of write-ups I received. As usual they are mostly unedited (only personal POTM-master insults were removed). Write-ups are provided in order of finish (more or less).
     Genetic_grouper  SCORE=1215 (230)  Jaco Cronje
Genetic_grouper submitted by Jaco Cronje at s9805027-at-student.up.ac.za
++ Please summarize the algorithm used by Genetic_grouper

The basic idea of the algorithm works as follows:

Genetic :

1. Set the best solution score to 0
2. Get a random solution
3. Combine the random solution with the current best solution
4. If the result is better than the current best solution,
	make the result the best solution
5. Goto step 2, repeat this until the time is up

I changed the above method a little by adding 8 "best" solution
keepers, almost like a genetic algorithm. The 8 solutions in
memory are the population.
Step 3 also changed: I combined 2 of the 8 solutions to get a solution,
and then combined a random solution with the better of the original 2 solutions.

In every third generation I threw the current best solution in at a
random position, just to make sure that the best solution is in the population.
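The keeper scheme might look like the sketch below (a rough Python reconstruction; `random_solution`, `combine`, and `score` are hypothetical placeholders for the contest-specific operations, while the 8-slot pool and the every-third-generation re-seeding follow the description above):

```python
import random

def evolve(random_solution, combine, score, time_left, pool_size=8):
    """Keep a small pool of best solutions; repeatedly combine pairs of
    pool members with fresh random solutions, as in steps 1-5 above."""
    pool = [random_solution() for _ in range(pool_size)]
    best = max(pool, key=score)
    generation = 0
    while time_left():
        a, b = random.sample(pool, 2)
        child = combine(a, b)                      # combine 2 of the pool
        child = combine(child, random_solution())  # then mix in a random one
        # replace the worst pool member if the child beats it
        worst = min(range(pool_size), key=lambda i: score(pool[i]))
        if score(child) > score(pool[worst]):
            pool[worst] = child
        if score(child) > score(best):
            best = child
        generation += 1
        if generation % 3 == 0:                       # every third generation,
            pool[random.randrange(pool_size)] = best  # re-seed the best solution
    return best
```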

Grouper :

When combining 2 solutions to get a better one, I divide the words in
the 2 solutions into groups. 2 words are in the same group if you could
walk from the one word to the other along words that are connected.
A useful fact is that no more than 2 words can cross each other
on a single cell, so it was easy to figure out which words cross which.

If a group had <= 30 words, I found the best solution for the words with
an optimized brute-force attack.
If a group had > 30 words, I chose a random word from the group and hoped
that the group would shrink to <= 30 words.
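The grouping step can be sketched with a union-find over word paths, assuming (hypothetically) that each candidate word is represented as the list of grid cells it occupies:

```python
def group_words(paths):
    """Group word paths (lists of (row, col) cells) so that words
    reachable from each other through shared cells share a group."""
    parent = list(range(len(paths)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    cell_owner = {}                        # first path seen on each cell
    for i, path in enumerate(paths):
        for cell in path:
            if cell in cell_owner:
                ri, rj = find(i), find(cell_owner[cell])
                if ri != rj:
                    parent[ri] = rj        # union: crossing words join groups
            else:
                cell_owner[cell] = i

    groups = {}
    for i in range(len(paths)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```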

During all this I removed from the groups some words that had to be in
the solution: output a word if the length of the word >= the sum of the
lengths of all the words that cross it.
This helped to reduce the size of the groups.
There are a number of methods like the one just mentioned that can
be used to reduce the size of the groups and output the correct words.
But I didn't have time to implement them in my program.

++ How much did you test?
A lot, I think. I just didn't test very long words.
I wrote about 20 programs, so each time I tested all
the programs on a test case to see which one was the best.
But when I discovered Genetic_grouper, it was very clear that
it would be my best entry. Greedy_dummy wasn't very good at all.

++ If I had it to do all over - here's what I would try ....
Implement more tricks on getting a group smaller.
Make it faster, so that I could use a large population

Live :  South Africa, in Bapsfontein near Pretoria in Gauteng Province

Fun : Write games, program algorithms, do POTM, hunting, fishing

Work : I just finished my 2nd year at the University of Pretoria.
I am studying Computer Science, and going into my 3rd year in 2000.
I'm 20 years old.
I took part in IOI98 (International Olympiad in Informatics), which was
held in Portugal.
My team won the Southern Africa ACM regional programming contest in
1999, so we are going to the World Finals of the ACM programming contest
in March 2000, which will be held in Orlando, Florida.
I would love to have some contacts overseas.

Secrets: Access Denied.

          Tessellate  SCORE=1182 (180)  Brian Raiter
Tessellate submitted by Brian Raiter at breadbox-at-muppetlabs.com
++ Please summarize the algorithm used by Tessellate

I had very little free time in which to work on a submission, so I
decided to see how much mileage I could get from a very simple
approach. The program stores the dictionary as a tree, and then
produces a list of every instance of every word in the grid. It then
sorts the list, and tries to use each word from the list in order,
with no backtracking.

The list is actually sorted and used twice. The first time it is
simply sorted by size, thus using longer words first and squeezing the
smaller words in later. The second time the list is sorted by size
squared over "frequency", where the frequency is calculated as the sum
of the number of words in the list that include that letter in the
grid -- thus preferring the grid positions that are "hardest" to
use. The program then outputs whichever one produced the best score.
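Assuming each word instance is a (word, cells) pair, the two sort passes might be sketched as follows (the representation is an assumption; the keys follow the description above):

```python
from collections import Counter

def sort_both_ways(instances):
    """instances: list of (word, cells), cells being grid coordinates.
    Returns the list sorted the two ways described above."""
    # pass 1: longest words first, squeezing smaller words in later
    by_size = sorted(instances, key=lambda inst: -len(inst[1]))

    # frequency of a grid cell = number of instances that touch it
    freq = Counter(cell for _, cells in instances for cell in cells)

    # pass 2: size squared over summed cell frequency -- prefer instances
    # sitting on cells few other words want, the "hardest" positions
    def hardness(inst):
        _, cells = inst
        return len(cells) ** 2 / sum(freq[c] for c in cells)

    by_hardness = sorted(instances, key=hardness, reverse=True)
    return by_size, by_hardness
```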

The list is created in memory, but with the actual words stored in a
temporary file. If memory or disk space runs out while the list is
being created, the program proceeds with the list created so far. If
time permits after the program is done with the list, it starts over
from the beginning, with the used letters in the grid removed.

My rather forlorn hope is that most of the other programs will bog
down or otherwise misbehave from combinatorial explosion in the real
test input.
++ How much did you test?

Quite a lot. I wrote a Perl script to produce inputs with various
sizes of grids, wordlists and alphabets, and another script to verify
the correctness of the output. I also tried a dozen or so comparisons
for sorting the lists before settling on the two that Tessellate uses.

++ If I had it to do all over - here's what I would try ....

Arrange to not work during the three months so that I would have time
to try some actual heuristics. I suspect that a simple, fast genetic
algorithm could probably do very well.


I live in Seattle. I program for a living and hack for fun.

  SeekAndYeShallFind  SCORE=1146 (159)  Randy Saint
SeekAndYeShallFind submitted by Randy Saint at randy.saint-at-usa.net
++ Please summarize the algorithm used by SeekAndYeShallFind
This program first obtains a greedy solution, with longest words first.
Then, it calculates which words have the most unused letters surrounding them.
 The words are sorted and the worst are removed from the bottom of the list.
Then it does an exhaustive depth-first search to add words back to the 
solution set.  If the solution is not improved, then it increases the 
number of words to remove from the bottom of the list.
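The outer remove-and-re-add loop might be sketched like this (the helpers `greedy_solution`, `badness`, `search_with_removed`, and `score` are hypothetical placeholders for the parts the write-up describes):

```python
def improve(greedy_solution, badness, search_with_removed, score, time_left):
    """Start from a greedy solution; repeatedly drop the k words with the
    most unused letters around them and exhaustively re-add words,
    growing k whenever that fails to improve the score."""
    best = greedy_solution()
    k = 1
    while time_left() and k <= len(best):
        ranked = sorted(best, key=badness)              # worst words last
        candidate = search_with_removed(best, ranked[-k:])
        if score(candidate) > score(best):
            best, k = candidate, 1                      # improvement: reset k
        else:
            k += 1                                      # no luck: remove more
    return best
```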

++ How much did you test?
My first submission attempted to generate all possible positions for each
 word on the list.  This works fine for normal words, but breaks down 
on words with many repeated letters because it ends up generating 
endless combinations of rearranging the letters in one spot. 
So my new method only gets positions on an as-needed basis.  
It finds a position and places the word.  Then tries to
place another word.  This worked to generate at least some 
solution for every test case, although it never really came close to 
the solutions that others were claiming on the message board.  
For test cases, I just used the example
test inputs posted on the message boards.  Thanks everyone!

++ If I had it to do all over - here's what I would try ....
I would make my Depth-first search more efficient by including iterative
deepening or changing it to some other algorithm for an exhaustive 
search, or near exhaustive.  Once it got to removing 7 or more 
words it would take forever to finish the exhaustive search.

++ WHERE DO YOU LIVE?  Houston, TX
DO FOR FUN?  The obvious: enter programming contests.
DO FOR WORK?  I manage a team of software developers.  
	We develop software used to plan Space Shuttle and 
	International Space Station missions.
INNERMOST SECRET?  I am very happy with my job.  
	I get to program (a little), boss people around (official 
	title is "Unit Lead"), and I'm working to help
	the manned exploration of space. 

              Hitman  SCORE=1146 (230)  Michael Strauch
Hitman submitted by Michael Strauch at michael.strauch-at-cityweb.de
++ Please summarize the algorithm used by Hitman

Hitman tries to find a compromise between pre-calculating as many paths as
possible to place them afterwards into the grid, and
searching for new paths when the grid is not empty any more.

The first stage of the algorithm does a precalculation of paths. For each
word, it starts a recursive search to find a lot of paths for this word. To
prevent running out of time, it restricts the search in the following ways:

- using a branch-and-cut technique by pre-scanning whether the recursive search
  would succeed if letters could be re-used (this can be done very quickly
  with a breadth-first search)
- once a path for a word is found, do not reuse grid cells for other paths of
  the same word (but reuse them for other words)
- restrict the recursion to 1000 recursive calls per word and starting position
- restrict the number of paths to 50000 in this stage

The second stage tries to choose, from the set of all available paths, a
subset of non-overlapping paths with a high score, where the score is
   * 10000 + (9999-)

This stage uses a "threshold accepting" optimizer (a simplification of 
the better known "simulated annealing"). The general idea (when you 
look for the biggest score) is:

  1. Choose a starting configuration C and a positive threshold T.
  2. Modify C randomly a little bit, giving a new configuration C'.
  3. If Score(C') > Score(C) - T, set C:=C'.
  4. If there is no score improvement "for a long time", decrease T.
  5. If T>0, goto step 2.
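The five steps might be sketched like this (Python; `modify` and `score` stand in for the configuration moves and path score described above, and the decay schedule is an assumed detail — "no improvement for a long time" is modeled as a full level of steps without a new best):

```python
def threshold_accepting(start, modify, score, threshold,
                        steps_per_level=1000, decay=0.9, min_threshold=1e-3):
    """Maximize score(C): accept a neighbor C' whenever
    Score(C') > Score(C) - T, lowering T when progress stalls."""
    current = best = start
    T = threshold
    while T > min_threshold:
        improved = False
        for _ in range(steps_per_level):
            candidate = modify(current)
            if score(candidate) > score(current) - T:   # step 3
                current = candidate
            if score(current) > score(best):
                best = current
                improved = True
        if not improved:        # no improvement "for a long time":
            T *= decay          # decrease the threshold (step 4)
    return best
```

On a toy objective such as maximizing -(x-7)^2 over the integers, the loose threshold lets the walk escape early plateaus, then the shrinking T locks it near the optimum.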

A "configuration" is just a list of non-overlapping paths. A
modification to this configuration is done by the following steps:
  - randomly choose a new path that is not already part of the configuration
  - delete any other path from the configuration that overlaps the new one
  - insert the new path
  - run a greedy algorithm to add previously calculated paths that do not
    overlap the ones that are now part of the configuration
  - From time to time: run a recursive search trying to find new paths
    that might fit into the grid now (non-overlapping)

Hitman runs this algorithm as long as there is time left. To use the time
completely, T will not be decreased below 0 and the cycle does not end until
time is up.

Some other things that may be of interest to you:
- Hitman does not add paths to the global list of precalculated paths 
	when there are already paths sharing exactly the same grid cells.
- The program makes use of a lot of lookup-tables to speed up the search.

++ If I had it to do all over - here's what I would try ....

I would do a lot more fine-tuning between pre-calculation, re-use of the
precalculated results, and searching for new paths during the second phase.

When I started, I tried to precalculate lots more different paths with
fewer restrictions, and keep the set of paths unchanged during the
optimization phase. This gave me better results for a lot of test cases, but
Hitman was not able to handle Frederic van der Plancke's input data with
this approach. Then I re-wrote big parts of my algorithm to allow finding
new paths during the second stage, but this slowed things down too much. 

In the end, I decided to add new paths during the second stage only from
time to time. Since I was running out of time and new ideas, I decided to
live with the fact that my algorithm does not find the best possible
solution in every case, but it should be able to handle most "hard" cases,
and I hope that the results it produces are not too bad.


- Germany. POTM. Software Engineering. 
  Well, it wouldn't be a secret any more ;-)

           Eccl_6_11  SCORE=1141 (172)  Phil Sallee
Eccl_6_11 submitted by Phil Sallee at sallee-at-cs.ucdavis.edu
++ Please summarize the algorithm used by Eccl_6_11

The first stage of the algorithm is to find all of the legal word paths in
the grid.  It does this by first hashing the single character and adjacent
double character sequences in the grid.  Then it finds legal matches by
chaining the two character sequences and matching connecting sequences
together in a depth first search fashion.

In the second stage, a set of non-conflicting paths are chosen in a way that
tries to maximize the character/word score.  The method I chose was an
iterative approximation algorithm which assigns a score to every word path
found.  This score is initialized to the length of the path, and in each
pass the score for a word is updated based on the other paths that it
conflicts with.  The following update rule was found through trial and error
and seems to work the best:


Those paths with the highest score are taken with highest precedence, and
any conflicting paths with lower scores are not used in the solution.  In
this way, the algorithm starts with the greedy solution (longest paths win)
and with each pass tries to improve the solution by taking into account
paths that conflict with other paths which are of high value.  At each pass,
the solution obtained so far is computed.  Paths which are assigned a value
below a minimum value threshold are discarded and paths which have no more
conflicts are permanently selected.  As long as the solutions obtained are
improving, the algorithm continues until time expires or there are no more
paths left to consider.  However it was found that with this update rule
alone, the optimum solution can not always be reached and that solutions do
not always converge but may diverge with increasingly worse solutions.
Other update rules did not seem to prevent this.  To get around this
problem, the algorithm first of all does not continue with a worse solution.
Instead it shakes up the solution a little by reducing all the scores in a
way proportional to their lengths again with the following additional update
rule:

Then it continues with the previous update rule until the solution again
starts to diverge.  Without this adjustment the algorithm could not find the
maximum score on the system test problem, but with only one or two
adjustments it could find the maximum score in just a few iterations.  This
method proved to be very fast at finding good approximate solutions on large
test grids with only a few iterations.  I had to play around with a number
of parameters, such as the minimum value threshold, how long to attempt
phase 1 before going to phase 2 or a maximum number of paths to find if
there are too many, how to determine if the algorithm is stuck and the
solution cannot improve, etc.

++ How much did you test?

Not nearly enough.  I used a few test problems posted to the web site and a
few of my own.  I found the system test problem to be the most help when I
was developing the algorithm.  I wanted to set up a large test suite
containing a lot of test problems for which I knew the optimal score so that
I could better determine how parameter changes affected the score but ran
out of time to do the kind of testing I wanted to.  I had one or two tests
that were large, or which generated too many paths to consider, as well as a
few small ones that had known optimal solutions which I could not yet
reach.
++ If I had it to do all over - here's what I would try ....

If I had time I would have modified the first phase to look for letters with
low frequencies and anchor the search for paths on those grid points.  As it
chains together matching letter positions it could look ahead to letters or
combinations with low frequency and determine by simple calculation whether
it was going to be able to reach a necessary character.  This would make it
possible for the algorithm to handle the "one word" example posted to the
web site, and other grids which only have a few possible paths, but a lot of
some character repeated in the grid.  This causes an explosion in partial
word paths which my current algorithm can't deal with very well.  For the
second phase of the algorithm I would try adding a second method that was
better at finding optimal solutions for smaller problems, creating more of a
hybrid approach.  It seems like a different algorithm is needed for finding
the absolute optimal solution on a small enough problem than for finding
really good solutions on large problems.  I would experiment with combining
two or more methods using the first to reduce the problem for the second
method or select which method to use based on the input size.  And I would
test a lot more, using a much more comprehensive test suite.


I live in Davis, California.  I am enrolled in the Computer Science
doctorate program at UC Davis, and am currently in my second year.  Before
attending UC Davis, I worked for 5 years in Quality Assurance at a software
company.  I am married with two young children who occupy much of my time.
I also enjoy playing the saxophone, internet chess and juggling when I have
any free time.  I have no innermost secrets.  ;-)   I have watched previous
POTM contests but this is the first time I have made the time to enter.  I
have really enjoyed it and only wish I could have spent more time perfecting
my algorithm.

      scrabblebabble  SCORE=1132 (251)  Barney Ross
	Sorry ... No questionnaire was returned for scrabblebabble

 word_finder_general  SCORE=1128 (158)  Joachim Smith
word_finder_general submitted by Joachim Smith at smithj-at-genrad.com
++ Please summarize the algorithm used by word_finder_general

Sort word list by length (to try to 'pack' the small ones around the large
ones).

For each word, find an occurrence of the first letter then try adjacent
letters for the rest of the word. If at any point, the next letter is not
found, 'undo' the word, letter by letter, and try any remaining adjacent
letters. If the complete word is found, print out the grid locations and
find the next start letter for that word. If the word is not able to be
found from any start letter, 'undo' the attempt and find the next start
letter. When there are no remaining grid letters to try, carry on to the
next word.

At any point, letters used (for complete words and in the middle of trying
to find a match) are marked as 'used' to avoid double-usage. If a word is
found, those letters stay used; if a word is not found, those letters are
un-marked.
++ How much did you test?

Quite a bit: a lot of small grids with special 'tricky' conditions
('blind alleys', multiples, long tangled words, etc.) were used, along with
the posted test grids.

++ If I had it to do all over - here's what I would try ....

Time permitting, I'd implement a scoring system for all possible found words
taking into account crosses where a letter is used in more than one word,
before optimising the final list to pick those words which give the highest
overall score.


Stockport, England.

Ride motorbikes.

Engineer software.

Hmmmmm I'll have to think about that one!

         snakes_nest  SCORE=1119 (278)  Frederic vanderPlancke
snakes_nest submitted by Frederic vanderPlancke 
at fvdp-at-decis.be / frederic.vdplancke-at-writeme.com
++ Please summarize the algorithm used by snakes_nest

[top level]
Genetic algorithm. It brings some improvement over a purely random algorithm,
but not that much.

population size (at most) 50, at each time only 30 serve as parents for next
generation: offspring is made of 50% of crossovers, 20% of transpositions, 30%
of mutations of parents;
- crossover of solutions A and B: 
    reads words from A and B alternatively, keeping only those words that do
    not overlap with previous words;
    then (try to) find maximal extension of the result.
- mutation of solution A:
    forbid 3 cells, and keep only the words in A that do not overlap the cells.
    then (try to) find maximal extension of the result.
- transposition of solution A:
    reverses a random sublist of the list of words.
- finding maximal extension of partial solution A:
    choose a cell randomly; randomized depth-first search for words through this
    cell that do not meet other words, until exhausted or an intermediate
    timeout is reached; choose one of these words randomly (this was a trick to
    avoid choosing only maximal paths). Repeat the whole process until we're
    sure no other word fits, or until 10 seconds have been spent.
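The crossover operator might be sketched as below (each 'word' hypothetically represented by the set of grid cells it occupies, with `overlaps` testing for shared cells; the maximal-extension step is omitted):

```python
def crossover(a, b, overlaps):
    """Read words from parents A and B alternately, keeping only those
    that do not overlap words already taken for the child."""
    child = []
    for i in range(max(len(a), len(b))):
        for parent in (a, b):
            if i < len(parent):
                word = parent[i]
                if not any(overlaps(word, taken) for taken in child):
                    child.append(word)
    return child
```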

The best solution is memorized and returned (it's not automatically included
in the next generation.)

My first algo, "purely random", was: repeatedly find maximal extension of the
empty solution, and choose the best solution found.
I began to implement the genetic algo in the last week before the finals, and I
had not much time to experiment with it...

[words searching]
It relies on a tree of all possible words, stored as (tail, reverse head) e.g.
"HELLO" yields word-paths "HELLO", "LLO:EH", "LO:LEH", "O:LLEH" (':' means
"back to start point".)
Care must be taken that the size of this tree does not exceed Fred's virtual
memory.

for 'MAYBE', 'MAYBENOT', 'MAYFLOWER' the tree would look like this (in informal
notation):
("", non_terminal,
 'M' -> ("MAY", non_terminal,
         'B' -> ("BE", terminal,
                 'N' -> ("NOT", terminal)),
         'F' -> ("FLOWER", terminal)))

++ How much did you test?

test ? why test ? ;-)))

(1) I'm not sure I tested enough, especially the genetic algo, but I did test
it on all "big" examples that have been provided on/via the message board. 
Previous versions have been tested against all message board examples.
(2) There's more than simply testing here. 
I computed an upper bound to the space spent by my lexicotree in the worst
case, and noticed (before sending anything to Fred) that on big datasets my
first lexico-tree implementation would challenge Fred's virtual memory (that's
only 350MB - a shame ;-).
So I had to change that.

++ If I had it to do all over - here's what I would try ....

If I want to win I should work harder (and not wait until last week for
top level algorithm replacement) - but do I still want to win ? ;-)


Still in Brussels (1030), Belgium, Europe, Earth (3rd planet on your left)
Still not wanting to share my innermost secrets ;-)
POTM idea: make an optimizing compiler for the very basic 'Brainfunct' 
a.k.a. 'brainf***' language (http://www.cats-eye.com/esoteric/bf/) 
(probably with a simple pseudo-assembler as target language)

            wordmine  SCORE=1098 (162)  V. Raghavan
wordmine submitted by V. Raghavan at raghavan-at-wipinfo.soft.net
++ Please summarize the algorithm used by wordmine

I rearrange the input to search for the largest word first and the
smaller ones later.
The algo is to search for a word in the grid; if found, try again for
the same word, else go on to the next word.

My Search Technique to identify a Given word is as follows

(1) Check the letters in the word which occur < 10% of the time in the
grid (i.e. < 20 in 80*26).
      These are called anchors.
(2) Identify the combination of anchors that I want to experiment with.
(3) For every letter in the word, see if I can find the same in the
grid, appropriately "distanced" from the anchors.
(4) Repeat (2) and (3) until all combos are exhausted.
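The anchor-selection step might be sketched as follows (representing the grid as a list of rows is an assumption; the cutoff fraction follows the write-up):

```python
from collections import Counter

def find_anchors(grid, word, cutoff=0.10):
    """Letters of `word` that are rare in the grid (occurring in fewer
    than `cutoff` of the cells) serve as anchors: positions where the
    search for the word can be pinned down."""
    cells = [ch for row in grid for ch in row]
    freq = Counter(cells)
    total = len(cells)
    return [ch for ch in set(word) if freq[ch] < cutoff * total]
```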

++ How much did you test?
I tested to a modest degree. I had this idea of using srand and creating
"random" grids, but I was not able to do it.
Overall I am not happy with the testing I did, but... that's all I
could afford.

++ If I had it to do all over - here's what I would try ....
Test harder. Start a little earlier on the POTM. Spend more time
researching various approaches.

I Live in Bangalore, India.
I  play with my 2 Yr Old son, PRANAV for Fun.
I am a Project Manager for a Software Development Team .
Innermost Secret -- None Really !!.

         Desperation  SCORE=1083 (311)  Pham Hai
Desperation submitted by Pham Hai at hailuong-at-hn.vnn.vn
++ Please summarize the algorithm used by Desperation
 First I use back-tracking to find as many words in the grid as possible. I
store all the words found in an array.
Then I build the relation among those words (the relation of crossing or not
crossing each other), using the Forward-Star data structure.
Last, I use a simulated annealing algorithm and try to find the best
solution.

++ How much did you test?
I don't have much time, so I have only tried on several random tests and
the system test.

I live in Vietnam
 I participate in this contest just for fun
 He he, if I tell you my secret, then it won't be a secret anymore, will it?

              seeker  SCORE=1081 (158)  Greg Youngdahl
seeker submitted by Greg Youngdahl at greg-at-ihgp.ih.lucent.com
++ Please summarize the algorithm used by seeker

	I don't suppose there is much in the way of an elegant solution beyond
brute force, or quick and dirty, in seeker.  As I read the grid data from the
input file I do save an indication of where all the 'A's, 'B's, etc. are in
the grid, and then as I read in the word list I use that information to build
a list of all possible layouts of that word in the grid (ignoring any
collisions with other words).  Knowing how to find all the occurrences of the
first letter of each word seemed to expedite my locating of possible
locations for the words.  Once I had a list of possible placements I then
tried combining them to come up with an answer to the problem.  Each time
I have an answer I compare it to the previous "best" answer and save it if it
is better.  I cycled through the list of placements, trying each one being
placed first, and continuing on to place the rest of the words, and then I
reorder the list, trying things like sorting it according to length or
alphabetic order.  Then I tried shuffling the list, alternately placing
words at the front or back of the list to create a new ordering.  I know
none of these are exhaustive approaches, and it will only be luck if I end
up with the best solution.

++ How much did you test?

	I used the two grids provided in the problem descriptions and mailings
plus the "system test" grid I got once I had sent in my first candidate.  I
was never able to produce the result that the top programs were generating.
I also created a random grid of the maximum size and took some document and
grabbed words from it to produce a maximum length word list, just to try
running with the stated limits.  Of course I'm sure Fred will produce an
insidious convoluted test problem that will have a huge number of possible
word placements, and I'll end up blowing the stack or running out of time
or even core dumping.

++ If I had it to do all over - here's what I would try ....

	I was thinking of a recursive solver that would pick every word
placement as the first word, and then pick every remaining word placement for
the second word, and so on such that every possible combination of solutions
would be tried, but I was sure that Fred would arrange a complicated enough
test problem that my recursion would blow the stack, even if there were time
for such thorough analysis.  An 80x26 version of the 4x5 matrix (with all
the 'T's and 'O's) could result in a huge list of potential placements, which
I suspect would probably result in more recursion than the stack could
handle.


	I live in Batavia, IL which is about 35 miles due west of Chicago
(or at least downtown Chicago).  It used to be kind of the edge between
suburbia and farmlands, but progress has taken over, and you have to get
rather farther west than Batavia to really be in farm country these days.

	Fun things include tinkering with music as a guitar/bass player,
a piano/organ hack, and recently, venturing into hard disk recording.
Someday maybe I'll actually complete my basement remodeling into a recording
studio (that's kinda fun too, but I don't spend enough time doing it).  I also
enjoy bicycling through the various bike paths in our area, (or elsewhere).
Sometimes programming is actually fun as well ;-)  I've also recently enjoyed
vacationing on the ocean shores in Hawaii (Maui) and Mexico (Playa Del Carmen)!

	I work for Lucent, writing software to help them convert our circuit
based switches to packet (or perhaps incorporate packet switching into the
existing circuit switching capability).

	Innermost Secret? ... Sometimes programming is actually fun as well.

         SuperCyborg  SCORE=1072 (249)  Navin Pai
SuperCyborg submitted by Navin Pai at navin_pai-at-usa.net
++ Please summarize the algorithm used by SuperCyborg
-- The algo used is a simple pattern-matching algorithm using principles from
the theory of lexical analysis and finite automata. SuperCyborg first reads all
the words and creates a pattern table, and then, starting from position 1,1,
looks recursively for the words that can match the pattern
generated from the list of words.

 ++ How much did you test?
-- No testing done. Since SuperCyborg is always very busy, the coding was
done in a matter of a few minutes and submitted as is.

 ++ If I had it to do all over - here's what I would try ....
-- SuperCyborg will augment the current code with a repetitive
statistical search to get the best from the matrix.

-- SuperCyborg currently resides in a city called Pune, located in a state
called Maharashtra, situated in the country known as INDIA.
-- As they say in my country, "WORK IS MY WORSHIP AND WORSHIP IS MY WORK", so
SuperCyborg enjoys his work and works for his enjoyment.
-- SuperCyborg does not have to think for an optimized solution.. the very
first solution that comes out is the optimized one.. of course, how you
implement it is totally another issue.. It's all gifted from SuperCyborGod.. HAIL
-- No comments on POTM IDEAS.

        Greedy_dummy  SCORE=1040 (252)  Jaco Cronje
Greedy_dummy submitted by Jaco Cronje at s9805027-at-student.up.ac.za
	(editor - about two weeks before the contest completed,
	Jaco sent a new entry that replaced this one, but since 
	I had already run the final on Greedy_dummy, I left it in )
	Jaco was kind enough to provide a write-up on this as well.

++ Please summarize the algorithm used by Greedy_dummy

Get a maximum of 50000 possible words from the grid.
Divide them into groups just like Genetic_grouper.
When a group had <= 60 words, find the best solution.
When a group had > 60 words, pick the word with the least
number of words crossing it.
When finished with all the words, go and get another 50000 possible words
and do the same with them.
and do the same with them.

Keep on doing this until there are no more possible words to find
on the grid.

++ How much did you test?
A lot, just like Genetic_grouper.

++ If I had it to do all over - here's what I would try ....
Check out Genetic_grouper, that's what I did.

   Consolation_Prize  SCORE=1026 (270)  fah other
Consolation_Prize submitted by William Little at lore-at-techie.com
++ Please summarize the algorithm used by Consolation_Prize

Consolation_Prize is a poorly made, slapped together mish-mash
of spaghetti code and diminishing goals.  I would have put more
work into creating an entry that wasn't laughable, but I had
several excuses to contend with, which took up most of my time.

++ How much did you test?

Far more time was spent testing than actually programming.
That's 80% of the problem right there.

++ If I had it to do all over - here's what I would try ....

Implement one of the ideas I had... maybe some sort of GA.



WHERE I LIVE: A quaint post-industrial globe, the native
inhabitants of which all seem to fear a great demon which they
call "Wye-Too-Kay-Flaw"...Ha-ha-ha!

WHAT I DO FOR FUN: Write programs, exploit technology, watch
television, think deep thoughts, search for the perfect
flashlight, and spin straw into gold.

WHAT I DO FOR WORK: Try to keep everything from falling apart.

INNERMOST SECRET: I am wholly unable to bite the back of my own
head...nor would I want to.

POTM IDEAS: Have people try to write a program that ACTUALLY
DOES have a Y2K flaw!

          AntiBoggle  SCORE= 993 (180)  Sofiya Vasina
	Sorry ... No questionnaire was returned for AntiBoggle

   Search_and_Rescue  SCORE= 985 (289)  Mark Engelberg
 Search_and_Rescue submitted by Mark Engelberg at mark.engelberg-at-bigfoot.com
 ++ Please summarize the algorithm used by Search_and_Rescue

Part of the difficulty with this contest problem is that you have to make a
lot of assumptions about what the final problem will look like in order to
create a competitive solution.  For example, if you expect the final problem
to be very hard, you must write a program that does quite a bit of analysis
to find a good solution.  However, an analytical program might easily be
beat by a faster, naive program on a small, simple grid since speed is a
tiebreaker if both programs perform perfectly.  Certain contrived inputs
will bring any algorithm to its knees.  How much should one worry about
such pathological cases?

I decided to assume that the final input would look like a fairly random
(large) grid of letters, with a (long) list of fairly normal words.  In
other words, I didn't worry too much about the possibilities of a whole grid
of As, or words findable in billions of possible places within
the grid.  I hoped that my program would at least provide SOME answer in
these aberrant cases, but I didn't really worry about that too much.

The first part of the algorithm is to find all "instances" of words in the
grid (keeping in mind that a word can potentially be found in more than one
place).  Since I was assuming that the grid and the word list were
"well-behaved", I just used a pretty straightforward recursive algorithm to
solve this part of the puzzle.

For each word instance, I maintain a list of all other word instances in the
grid that intersect it (i.e., share at least one coordinate).

I define a "covering" as a set of word instances that don't intersect one
another.  Every covering is a potential solution to the problem.  Generating
a covering is pretty easy, just start with an empty covering and a
"masterlist" of all word instances in the grid and do the following:
1.  Select a word instance from the masterlist and add it to the covering.
2.  Remove the selected word instance and all intersecting word instances
from the masterlist.
3.  Repeat until masterlist is empty.

In theory you can always calculate the best possible solution by generating
every possible covering using recursion, ranking them all, and printing out
the best one.  In fact, this technique can easily generate the perfect
solution for a masterlist of 30 word instances in less than a second.  But
even using a couple tricks to prune the recursion, this technique will take
over ten minutes with word lists of more than about eighty words.

With a big masterlist, there's clearly no way to generate all possible
coverings, but how can you generate just one covering that will be likely to
score well?  First we rank all the word instances by assigning some "fitness
rating".  Then we take the masterlist and an empty covering and do the following:
1.  Select the word instance in the masterlist that has the highest fitness
rating and add it to the covering.
2.  Remove the selected word instance and all intersecting word instances
from the masterlist.
3.  Repeat until masterlist is 30 word instances or less, and then find the
best covering for the remaining word instances using the perfect technique
described above.

So the only trick now is to find a really good fitness rating that will
cause the algorithm to make a reasonably good choice at each step in the
generation of the covering.  I experimented with many complex fitness
ratings before I finally stumbled upon a simple fitness rating that worked
astonishingly well.  The rating of each word instance is the length of the
word divided by the number of word instances that intersect it in the grid
(including itself).  In other words, we always pick the word that has the
lowest density of interconnectedness within the grid.
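Mark's ranking-plus-greedy loop can be sketched roughly like this (the data structures and names are illustrative, not his actual code):

```python
def greedy_covering(instances, intersects):
    """Greedy covering sketch.  instances maps each word instance to its set
    of grid cells; intersects[w] is the set of instances sharing at least one
    cell with w (including w itself).  Fitness = word length divided by the
    number of intersecting instances; highest fitness is chosen first."""
    rating = {w: len(instances[w]) / len(intersects[w]) for w in instances}
    masterlist = set(instances)
    covering = []
    while masterlist:
        best = max(masterlist, key=lambda w: rating[w])
        covering.append(best)
        masterlist -= intersects[best]  # drop best and everything it touches
    return covering
```

In the real entry the loop stops once 30 or fewer instances remain and hands the remainder to the exhaustive search; here it simply runs to completion.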

Hopefully the generation of one covering takes less than ten minutes.  Just
in case, I generate intermediate solutions as I parse through the input
word list and start finding word instances in the grid.  Also, if for some
reason an aberrant input causes over 20,000 word instances to be found, I
just stop searching for word instances and go ahead and start working on a
solution for what I've found so far.

If the first covering is generated in less than ten minutes, I go ahead and
try to generate more coverings until time is up.  I do this by starting the
covering off with a word instance that didn't make it into any previous
solution, and then proceeding as before.  Of course, I always store the best
solution encountered so far so that I can print it out when time runs out.

 ++ How much did you test?

Unfortunately, since I was developing on Windows, it was difficult to test
the timing code with any certainty.  I conservatively set my cutoffs to
seven minutes instead of ten.

I didn't generate any of my own test inputs, I just used the ones given in
the contest and a couple of the less outlandish ones that I pulled off the
message board.

I did a lot of testing on my early prototypes, but didn't have as much time
as I would have liked to test my final submission.

 ++ If I had it to do all over - here's what I would try ....

If time permitted me to generate multiple candidate solutions, I would try
to use a few radically different fitness ratings rather than restrict myself
to small variations in coverings generated from one fitness rating.


Seattle.  Play boardgames.  Full-time daddy.

         playing_god  SCORE= 970 (187)  T.Saha R.Bhatia
playing_god submitted by Tirthankar Saha and Rajat Bhatia at tiru-at-SoftHome.net
++ Please summarize the algorithm used by playing_god

A simple backtracking recursive function is used to find the 
words from the grid.
After it has found the words, the proggy uses simulated 
annealing to get the best set of words. There are two lists:
on[] containing words to be selected and
off[] containing the rest of the words.
The objective function (an analog of energy,
which is to be minimized) is defined as

E = ALPHA*sqr(no. of grid positions reused) +
    BETA*(total no. of letters - no. of letters in on[]) +
    GAMMA*(no. of words on the list)

Rearrangements:
  1. One word at random is selected and
     moved out of the current list (on[]) to
     the other list (off[]).
  2. One word at random is selected from
     off[] and moved to on[].  (simple!)

The cost dE (the change in E) for a rearrangement is calculated and
the rearrangement is accepted with probability exp(-dE/T),
where T is the temperature, determined by the annealing schedule.
A maximum cut-off was set on the no. of words found for a single
dictionary entry so that the proggy doesn't run out of memory and finishes
in the allotted time. The toughest part was to determine the constants
ALPHA, BETA, GAMMA, which required a lot of trial and error. The initial
temp. and the annealing schedule also demanded a lot of testing and fine
tuning. To give the same result on all platforms, a random number generator
was also added.
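The energy function and the Metropolis acceptance rule described above might look like this in outline (ALPHA, BETA, GAMMA here are placeholder values; the tuned constants aren't given in the write-up):

```python
import math
import random

# Placeholder constants; the authors tuned theirs by trial and error.
ALPHA, BETA, GAMMA = 10.0, 1.0, 0.5

def energy(reused, total_letters, on_letters, n_words):
    """E = ALPHA*sqr(grid positions reused) + BETA*(letters not covered)
    + GAMMA*(words on the list), exactly as defined above."""
    return (ALPHA * reused ** 2
            + BETA * (total_letters - on_letters)
            + GAMMA * n_words)

def accept(dE, T, rng=random.random):
    """Metropolis rule: always take improvements (dE <= 0); otherwise accept
    with probability exp(-dE/T), where T follows the annealing schedule."""
    return dE <= 0 or rng() < math.exp(-dE / T)
```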

++ How much did you test?
A LOT.  99% testing, 1% coding.

++ If I had it to do all over - here's what I would try ....
We won't; we're sick'n'tired of optimization problems.


We live in New Delhi, India studying Instrumentation and 
Control Engg. at Delhi Institute of Technology(NSIT now).

Fun = Chat, chat, chat.. [Rajat]
      Cricket, cricket, cricket.. [Tirthankar]

The_Drunken_Lexicographer  SCORE= 963 (218)  Ken Weinert
The_Drunken_Lexicographer submitted by Ken Weinert at kenw-at-ihs.com
++ Please summarize the algorithm used by The_Drunken_Lexicographer

  I used a set of links such that all like letters were linked
together and the struct contained the row and column indicators. The 
link head contained the number of letters of that type. This allowed 
me to easily determine if there were sufficient letters present to
even attempt to find the word.

  The second thing I did was to compute the letter frequency. 
This was used to sort the words - I would add up the weight given to 
each letter and then divide by word length to give an average 
uniqueness to each word. Using that value the word list was sorted
in reverse order so words with more uniqueness were first. The theory
here was that I would be able to find words with infrequently used 
letters before the more common letters around them were used.
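A rough sketch of that ordering (assuming, since the write-up doesn't give the exact weights, that each letter's weight is the inverse of its frequency in the grid):

```python
from collections import Counter

def sort_by_uniqueness(grid, words):
    """Weight each letter by the inverse of its grid frequency (an assumed
    weighting), average the weights over each word's letters, and sort so
    the most 'unique' words come first."""
    freq = Counter(ch for row in grid for ch in row)
    def uniqueness(word):
        return sum(1.0 / freq[ch] for ch in word if ch in freq) / len(word)
    return sorted(words, key=uniqueness, reverse=True)
```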

  Now we're ready to actually find words :-) This was done by
finding the starting letter and trying to find an adjacent letter
in the list - if the letter was found then it was put on a linked 
list of the letters for the current word. If the entire word was
found then each letter was marked as being used and the word written
out. If it wasn't found, then we moved on to the next word.

  The word search itself was in a loop to catch those 
occasions where a word was contained more than once in the input.

++ How much did you test?

  I tested with the sample list and with 4 or 5 others that
I found on the chat page. What I found was that for certain inputs
my algorithm is optimal. For other inputs I could see that other
algorithms did much better than I did.

  After having had a POTM that I could actually understand and
code something that worked, I decided to leave well enough alone and
let it ride as it was. Maybe I'd get lucky that the final problem would
be one of the optimal ones for me :-)
++ If I had it to do all over - here's what I would try ....

  Not really sure - I'll be interested to see what others did
that was so much better than mine. I'll be sure to have a wall handy
for banging my head against when I look and say "Of course! Why didn't
*I* think of that?"

  Denver, CO

  PBEM games (battle and galaxy)

  Odd job programming - we take on jobs that others in the company
  say can't be done and then we do it.

  If I told you it wouldn't be a secret now, would it?

            GridWorm  SCORE= 961 (217)  Viswanathan Lakshmi
	Sorry ... No questionnaire was returned for GridWorm

            EagleEye  SCORE= 954 (256)  Ionel Santa
EagleEye submitted by Ionel Santa at ionels-at-phobos.cs.unibuc.ro
++ Please summarize the algorithm used by EagleEye

    There isn't very much of an algorithm in my program. 
In the first phase I just search for as many valid paths (valid 
regarding the words which must be searched) in the grid as 
possible in 200 seconds and using the available memory, and 
then I search for subsets of the set of all found paths 
such that the paths in these subsets don't use the same 
cell in the grid twice.

    In the second phase I shuffle the set of all paths randomly, 
and for every resulting ordered set I generate a subset by 
looking sequentially at every path: if none of its cells 
are used by the previously chosen paths, I add this path to the 
subset, otherwise I ignore it. I repeat this second phase for the 
rest of the 10 minutes.
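The second phase amounts to repeated shuffle-and-sweep; a minimal sketch, with each path represented as a (word, cells) pair and the score taken simply as the number of cells covered:

```python
import random

def best_shuffle_covering(paths, rounds=1000, seed=0):
    """Shuffle the path list, sweep it once adding any path whose cells are
    all still free, and keep the best-scoring subset over many rounds."""
    rng = random.Random(seed)
    best, best_score = [], -1
    order = list(paths)
    for _ in range(rounds):
        rng.shuffle(order)
        used, chosen = set(), []
        for word, cells in order:
            if used.isdisjoint(cells):
                chosen.append((word, cells))
                used |= cells
        if len(used) > best_score:
            best, best_score = list(chosen), len(used)
    return best, best_score
```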

++ How much did you test?

    Not very much. I only hoped that my program wouldn't crash.

++ If I had it to do all over - here's what I would try ....

    Maybe I would use the way the found paths are distributed 
on the grid, so that I could shuffle independently subsets of 
paths which depend the most on each other (for example, shuffle 
the paths which contain a cell on row 3 independently from 
those which contain a cell on row 11). Anyhow it isn't 
very obvious to me how I could do that.
    Another idea would be that I might try to make improvements 
on a valid solution for example by eliminating a path (or a set of 
paths) from the solution and trying to fill the resulting free 
cells with the rest of the paths.


    I live in Bucharest, Romania.

    For fun I like to read books, to listen to music, but mostly to 
program various sorts of things and to solve problems (especially 
from number theory).
    For a while I've been interested in factoring large numbers 
into prime factors, but until now I've only implemented an object 
oriented library in C++ for large numbers, and some simpler 
algorithms like Pollard rho, Pollard p-1, a fast generator of the 
primes below a given limit, and the quadratic sieve (QS). I'd like to 
continue with MPQS, ECM and GNFS (if it isn't too difficult).
    Other interests are to prove theorems with the computer, 
and to write a program for checking the correctness of a 
solution (written as formally as possible).

    I'm a programmer (for 4 months); I've worked mainly in JAVA 
and Visual C, and now I'm involved in a computer telephony project.

    I do have some secrets, but I don't know which is the _innermost_.

           Redundant  SCORE= 950 (264)  Andrew Gauld
Redundant submitted by Andrew Gauld at andrew.gauld-at-att.com
++ Please summarize the algorithm used by Redundant

Basic scan of each unused position to see if any words can be placed any which
way.  Accept first fit.

++ How much did you test?

Test?  What's that.  Didn't have time to do much.
++ If I had it to do all over - here's what I would try ....

Spend more time on it.

          Word_Quest  SCORE= 946 (232)  Mike Maxim
	Sorry ... No questionnaire was returned for Word_Quest

            WORDWORM  SCORE= 945 (248)  Mike Sweeney
	Sorry ... No questionnaire was returned for WORDWORM

   The_Boswell_Alien  SCORE= 927 (255)  David Nicol
The_Boswell_Alien submitted by David Nicol at david-at-kasey.umkc.edu
++ Please summarize the algorithm used by The_Boswell_Alien

After the data is read in, an exhaustive list of all words available in
the grid is determined.

Once we've got the list, different arrangements are generted
by randomly shuffling the list and inserting into a text grid the words
from the list that will fit in without conflict.  The best solution
is remembered, but shuffling and testing continues.

When the time limit is up, we go with the best result we have found so far.

++ How much did you test?

I made one big test dataset and played with it for about six hours.

++ If I had it to do all over - here's what I would try ....

I tried an alternate approach involving backtracking and rearranging
good arrangements but could not get the pruning to work aggressively
enough.

Improvements to my method would involve the rearranging phase:  It could 
be done more intelligently.  Also, analysis of the problem to the point
at which it could be determined with certainty if a given solution is
the best solution would render further searching unnecessary, but I was
not able to find such a method.  The plan was to find such a bounded method
and then rewrite the good algorithm in C; the plan was derailed.


I live in Kansas City and I play Go for fun and work at the university. 

I would like to see POTM adopt the Solitaire 500 contest as an annual
spring contest. ( http://www.tipjar.com/games/solitaire/500.html )

          GumpSearch  SCORE= 841 (264)  Dave Gushurst
 GumpSearch submitted by Dave Gushurst at Dguthurtsmail-at-aol.com
 ++ Please summarize the algorithm used by GumpSearch

    Brute recursive force.  For each letter in the word, look at each 
neighbor to see if it's the next letter.  Repeat until found or no neighbor 
has the next letter.  If found, unwind, printing the coordinates as you go.  I 
saved a little processing time by initially jumping to the first 
letter in the grid that matched the first letter in the word, and then skipping 
to the next occurrence.  I also saved the one-letter words for last, figuring 
that I could simply pick up unused letters, without taking them away from 
bigger words.  I used pointers as much as possible.

 ++ How much did you test? Made sure (once I fully understood the rules) that 
I got the 'simple sample' right.    Also made up some samples.
 ++ If I had it to do all over - here's what I would try ....  Think more 
about the "curves" that the final run could have.

DO FOR FUN? Shoot pool, Beer, Computer Games, bowling

DO FOR WORK? As little as a programmer can get away with

INNERMOST SECRET? I type really really bad.

          wordsearch  SCORE= 841 (264)  Chris Smith
	Sorry ... No questionnaire was returned for wordsearch

          Star_Words  SCORE= 817 (179)  Lourenco Arnasalon
	Sorry ... No questionnaire was returned for Star_Words

 exhaustive_searcher  SCORE= 816 (262)  Ha Nguyen
	Sorry ... No questionnaire was returned for exhaustive_searcher

  Box_With_Mouthfuls  SCORE= 815 (277)  Alexandr Ermolaev
 Box_With_Mouthfuls submitted by Alexandr Ermolaev at aler-at-bal.ru
 ++ Please summarize the algorithm used by Box_With_Mouthfuls

I picked quite a sluggish language - JAVA - and consequently the central idea
is this:
at first the program should spend time on a careful analysis of the data, and
then use the obtained knowledge to try to save time in the solution
phase.

In the first phase the program attempts to estimate the size of the problem
and to find symmetry. If no symmetry is found, and the problem is
unwieldy, the program splits the board into smaller pieces.
The search for a solution for each piece goes as follows. First the
subtask is analyzed and unpromising words are eliminated. Then the
complete set of arrangements of words on the piece of the grid is built.
Next it is checked whether this set consists of two or more non-overlapping
subsets. If so, each subset is solved separately.
The solution is found by exhaustive search.
In the exhaustive-search phase the earlier information on crossings of
words is used again: while any one word is being checked, all words
it crosses are temporarily eliminated from consideration.
This makes it possible to reduce the number of arrangements more than tenfold.
If the piece has symmetric copies, the retrieved solution is reused for them.

 ++ How much did you test?
Unfortunately, not too much and not enough. But I should tell, that the best
test examples I have downloaded from The POTM Message Board.

 ++ If I had it to do all over - here's what I would try ....
I would speed up a series of methods, would add analysis of more types
of symmetry, and would test some other ideas.

I live in the town of Balakovo (Saratov region, Russia). My town is situated
on the left bank of the Volga river.
I do it for fun.
As for POTM ideas... Maybe try using a JAVA translator instead of the JDK?

           The_Sword  SCORE= 759 (335)  Jeffrey Rainy
	Sorry ... No questionnaire was returned for The_Sword

               NEWS1  SCORE= 586 ( 38)  I. Ogla
	Sorry ... No questionnaire was returned for NEWS1

                NEWS  SCORE= 586 ( 38)  Aivars Zogla
NEWS submitted by Aivars Zogla at uvsk-at-fix.lv
++ See entry for NEWS1 above

        DRAW_HER_COS  SCORE= 500 ( 82)  Raghotham Murthy
DRAW_HER_COS submitted by Raghotham Murthy at raghothamsm-at-hotmail.com

Guess why my entry was called "DRAW HER COS"?!!
Well, it is an anagram of "WORD SEARCH"!!! Good eh?!!

This was my first entry to POTM. It was certainly great fun finding the 
anagram!!! As for the algorithm of my "solution", there wasn't much. Just 
find the paths of all the words in the grid such that they don't overlap 
with the previous words. The order of choosing was (horrible!!) the 
decreasing order of word length.

            Threader  SCORE= 475 ( 80)  Richard Taylor-Kenny
	Sorry ... No questionnaire was returned for Threader

             kwyjibo  SCORE= 443 ( 37)  Steve Rospo
	Sorry ... No questionnaire was returned for kwyjibo

The_Quantum_Leap_Accelerator  SCORE= 425 ( 65)  Rick Bischoff
	Sorry ... No questionnaire was returned for The_Quantum_Leap_Accelerator

               NEWS2  SCORE= 325 ( 13)  Vars Gla
NEWS2 submitted by Vars Gla at azogla-at-ugale.edu.lv
++ See entry for NEWS1 above

                 fw1  SCORE= 192 ( 25)  Dale Swift
fw1 submitted by Dale Swift at dale.swift-at-bankerstrust.com.au
++ Please summarize the algorithm used by fw1
Brute force.

Stage1: Find all occurrences of the list words in the grid.
The input words are first sorted into a search tree, then the 
grid is scanned and matched against the tree generating a 
list of all words in all possible positions. Each candidate 
word in the resulting list obeys the rules i.e. it won't 
re-use any grid positions itself, but there is no attempt 
at this stage to check that pairs of words don't intersect.

Stage2: Find solutions using a simple depth first search. 
The only heuristic I added was to do an initial sorting of the 
candidate words by length as this seemed to produce better results 
quicker. Some code was added for performance e.g. pre-calculate 
whether words intersect and store the results in an array.
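Stage 1's search tree is essentially a trie; here is a sketch of building one and scanning the grid against it. (Eight-directional, bendable paths with no cell reused within one placement is an assumption about the puzzle rules, not something the write-up states.)

```python
def build_trie(words):
    """Sort the word list into a search tree; '$' marks the end of a word."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node['$'] = True
    return root

def find_instances(grid, words):
    """Return every placement of every word as (word, path), where path is
    a tuple of (row, col) cells walked in order."""
    trie = build_trie(words)
    rows, cols = len(grid), len(grid[0])
    found = []

    def walk(r, c, node, path, prefix):
        ch = grid[r][c]
        if ch not in node:
            return
        node, path, prefix = node[ch], path + ((r, c),), prefix + ch
        if '$' in node:
            found.append((prefix, path))
        for dr in (-1, 0, 1):          # try all 8 neighbors
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols \
                        and (nr, nc) not in path:
                    walk(nr, nc, node, path, prefix)

    for r in range(rows):
        for c in range(cols):
            walk(r, c, trie, (), "")
    return found
```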

++ How much did you test?
Enough to know the program was robust.

++ If I had it to do all over - here's what I would try ....
Take a holiday from work first. The POTM threatened to take over my 
life; what started as a quick program to try my ideas started to 
become obsessive. It took a lot of will power to just stop 
once I had a program that at least produced some results.

Sydney, Australia.

            snakepit  SCORE= 101 (  2)  Koder Wannabe
	Sorry ... No questionnaire was returned for snakepit

       AtALoss4Words  SCORE=   0 (  0)  Scott August
AtALoss4Words submitted by Scott August at scottaugust-at-iname.com
++ Please summarize the algorithm used by AtALoss4Words

The overall idea was to run through as many different 
sorted word lists and different solve methods as possible, rather than 
really trying to use some type of greedy algorithm.

The different sort methods were:
1) word length.
2) character frequency distribution normalized to word length.
3) total possible positions of word in character array.

The different solve methods were:
1) Start at one end of the sorted word list and apply that 
	word as many times as possible before going on to the next word.
2) Start at one end of the sorted word list and apply each 
	word one time, repeating the list as often as a word was applied.

Only words that existed in the character array were kept.  
A list of all possible positions that that word could take 
was built up and refined when the word was checked for existence.  
This took more time up front, but there is never a need to "search" 
for the word again - just rule out any taken positions and 
remove any invalid character positions and select one set.

The method of selecting the position set from all the possibles 
was to first rule out any already used characters, then remove 
any invalid character positions from the character position set 
that was left, then starting with one end of the word
remove all but one position, then remove any invalid positions, 
repeating until there is only one position left for each character.
Mark the character in the array by changing it to lower case.  
To start a new solve, just uppercase the character array or 
put back a copy.

++ How much did you test?

Quite a bit, the sample grids that everyone posted were very 
helpful in finding different scenarios to handle that I didn't think of.

It still however, could not handle Frederic's test grids.

++ If I had it to do all over - here's what I would try ....

There were quite a few ideas, but coding them up to test them 
out was limited by time.  The 10 min time constraint made the 
problem interesting trying to figure out what areas to give up 
some time on to be able to apply it to some other areas.


Boulder, Co

Not sure what fun is anymore; I just finished my master's while 
working full time, which took 7 years.  We also just finished 
building our church with mostly volunteer labor, which took 5 years.  
Now I hope to figure out what is fun again.

I am a servo engineer at Seagate.

Only my wife will know those.

If Fred uses a mirror/laser/pattern potm, it's my fault!

          Athyagrahi  SCORE=   0 (  0)  Madhu Kurup
Athyagrahi submitted by Madhu Kurup at madhumk-at-india.com
++ Please summarize the algorithm used by Athyagrahi

Original attempt "Theosaurus Rex" was a backtracking algo. Failed due to the
amazing amount of time that it took. :-)). Time ~ 10min == bad sln.

Final attempt. Simple Greedy algo. Optimization function:

number of chars in word * number of chars that will remain after the word is
placed.

This however was by itself insufficient, so added some wrinkles to allow for
a more flexible choice of word from all words of choice.

++ How much did you test?

VERY LITTLE. :-((. My code is rather memory intensive, so tested on small
cases only. I tried the *HUGE* test7 put up at the POTM site, but I ran
outta memory ( 98, NT, Linux on a 64MB RAM system), so gave up! Tirthankar
gave me some inspiration, but hey, his code was far better!!!

++ If I had it to do all over - here's what I would try ....

Evolutionary Programming. No doubt about this - it would be doing
BackTracking and beating its pants off. In fact, I am implementing some of
this - if I get something interesting,  then I can crib...

Bangalore, India.

Genetic Algos. Evol Program, Quake III, Linux, Quiz, Beer.

Study Comp. Science & Engg at UVCE, Blr. I am doing my Final year for my
undergrad degree.

a) I think my entry is too lucky and too badly written to be where it is,
well, it doesn't hurt!!!.
b) I think I'm delusional......
c) The Star Wars program has failed, there are no lasers in the sky.
d) The Star Trek program has ended, the new series begins next season.

    Jus Kidding....

Minesweeper / Freecell solvers. Should be good fun either ways - they are
the typical problems that POTM revels in ;-)).

           Babysteps  SCORE=   0 (  0)  Terje Kristensen
	Sorry ... No questionnaire was returned for Babysteps

        BoggleBuster  SCORE=   0 (  0)  John Aughey
	Sorry ... No questionnaire was returned for BoggleBuster

 Breaking_the_Puzzle  SCORE=   0 (  0)  Nabeel Mohammed
Breaking_the_Puzzle submitted by Nabeel Mohammed at nab83-at-yahoo.com

++ Please summarize the algorithm used by Breaking_the_Puzzle
The program reads the whole grid of letters into one single-dimension
array. Let's call this 'letters'. Another two-dimensional array is kept to
store the locations of each letter of the alphabet in the array 'letters';
this is called 'alpha'. This avoids any searching for a specific letter, and
also when one location in the grid - or in this case in the array 'letters' -
is used, that value in 'alpha' is removed and -1 is set in its place. The
marker that no more letters are present after a particular position is the
value -2. This is all the initialization required. 
A module (word_coordinates) is used to find out the letters in the grid and
decide whether they are in the correct order to form the word. It deals
with one letter at a time recursively. A letter position is found. If it is
the first letter then the coordinates are added to a global character
array. The conversion from single dimension index to double dimension index
is a simple matter of integer division. If any other letter does not match
the conditions then the next position of that letter is found. If no
positions match then the function returns -2 and a new location of the
previous letter is found and the whole process is carried on again. If the
function runs out of positions for the first letter, it decides that the word
is not present. Whenever a location is used it is set to -1 in the array
'alpha'. This prevents the function from reusing this location.
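The 'alpha' index and the 1-D-to-2-D conversion can be sketched like this (a simplified illustration, not the entry's actual arrays):

```python
def build_alpha(letters, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """For each letter, the list of its positions in the 1-D 'letters'
    array, terminated by -2; a used position would later be overwritten
    with -1 so it is skipped, as described above."""
    alpha = {ch: [] for ch in alphabet}
    for i, ch in enumerate(letters):
        if ch in alpha:
            alpha[ch].append(i)
    for ch in alpha:
        alpha[ch].append(-2)   # end-of-list marker
    return alpha

def to_2d(index, width):
    """1-D index -> (row, col) by integer division, as the write-up notes."""
    return divmod(index, width)
```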

++ How much did you test?
After I wrote the program first ii tested it with the example given in the
long rules. Then I made a 20x20 grid of letters in an editor and tested
with that.  So there was really only 2 testing runs before submission as I
was running out of time.

++ If I had it to do all over - here's what I would try ....
I would not take the grid letters in a single dimension array but rather
keep it in a 2 dimension one and store both indexes in another array for
alphabet location. This would reduce the number of divisions I did in the
searching module. And also I took the wordlist in an array of string and
sorted the array in descending order using a 'bubble sort' algorithm. I
would use a 'quick sort' algorithm instead.

++ WHERE DO YOU LIVE?  I live in Dhaka, Bangladesh.

DO FOR FUN?  Watch TV and do programming.

DO FOR WORK?  I am just a student of grade XI doing my 'A' levels. So I
don't do any work as yet although I would love to do some.

INNERMOST SECRET?  I can't tell you that.

   BuildSquareQuilts  SCORE=   0 (  0)  Sergey Kondrachshenko
	Sorry ... No questionnaire was returned for BuildSquareQuilts

                Finn  SCORE=   0 (  0)  Emil Ong
Finn submitted by Emil Ong at emilong-at-midway.uchicago.edu
++ Please summarize the algorithm used by Finn
I used a brute force algorithm.  This came in two incarnations:
1) I created a list of all possible arrangements of the words then
afterwards, I went through and threw out the ones with overlaps.  Then I
found the best solution of the remaining arrangements.  This was very bad
as it took a very long time and a lot of memory.
2) I basically did the same thing, but I tested as I created arrangements.
This ended up being much faster and requiring much less memory.

++ How much did you test?
Hardly any.  I didn't really try to figure out any good test examples.  I
just used the two tests provided.

++ If I had it to do all over - here's what I would try ....
I would rewrite it in C for speed (or maybe even Java).  When the contest
was announced, I had just learned Python and I thought that I would give
it a try.  It was easy to write the program and it ended up only being
something like 200 or 250 lines long (try doing that in C!).  Still, the
ease in writing cost me in speed.
I would also test more and rethink my algorithm. 

           Four_Eyes  SCORE=   0 (  0)  Ossama Abdel-Hamid
 Four_Eyes submitted by Ossama Abdel-Hamid at ossama_a-at-hotmail.com
 ++ Please summarize the algorithm used by Four_Eyes
 I think it is a rather trivial algorithm, because I hadn't enough time to
develop a good one, because of studying.
In brief: I put all the paths in a list, then take the non-intersecting paths
from the first to the last. Then I try not taking the last taken path and see
which case is better. Then I go to the previous taken path, try not taking
it, test the best cases of the following paths, and so on until I reach the
first path. Thus I will have found the best paths.

 ++ How much did you test?
I tried the tests from the message board.

 ++ If I had it to do all over - here's what I would try ....
I would try to make a different algorithm.

I live in Egypt. I study in the faculty of computers and 
	information, Cairo University.
The innermost secret : I'm willing to be the programmer of the next month.
POTM IDEAS: the contest should have prizes.
	(editor note: hahahahahaha)

         GridFondler  SCORE=   0 (  0)  Bryan Slatner
	Sorry ... No questionnaire was returned for GridFondler

     Gridley_DoWrite  SCORE=   0 (  0)  Jeff Stone
++ Please summarize the algorithm used by Gridley_DoWrite

As you're aware, I did not finish resolving porting problems on
Gridley_DoWrite.  Its latest incarnation worked as follows ('word' in the
following refers to any word found anywhere in the grid, not just to the
words in the given list):

* Find all words in grid
* Assign a 'weight' to each grid letter equal to the number of words 
	that can be found using that grid letter
* Calculate word weights by adding the grid letter weights for 
	each word and dividing total by number of letters in word
* Build a linked list for each grid letter of all words that 
	can be formed using that letter
* Sort linked lists from lowest to highest word weights
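The weighting steps above might be sketched as follows (using a dict of word -> cells instead of linked lists, and ignoring the possibility of a word being found in several places):

```python
from collections import defaultdict

def weigh(instances):
    """instances maps each found word to the list of grid cells it uses.
    A cell's weight is the number of words through it; a word's weight is
    the average of its cells' weights; words are ordered lightest first."""
    cell_weight = defaultdict(int)
    for cells in instances.values():
        for cell in cells:
            cell_weight[cell] += 1
    word_weight = {w: sum(cell_weight[c] for c in cells) / len(cells)
                   for w, cells in instances.items()}
    order = sorted(word_weight, key=word_weight.get)  # lowest weight first
    return cell_weight, word_weight, order
```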

At this point I played with different word-fitting algorithms.  
The one I thought had the best promise was to assign words 
from all four corners towards the center at the same time, 
selecting the word that remained closest to the
corner, without isolating any grid letters if possible.

++ How much did you test?

Fairly extensively.  Program worked fine on my machine.  I 
created a large test grid with only three letters, then 
placed many words from 1 to 80 characters in the word list and let it run.

++ If I had it to do all over - here's what I would try ....

I won't know until I review the POTM results.


Live in Chicago.  Fun:  archery, racquetball, good restaurants with good wines.
Work:  high-level networking consultant.  Secrets:  That would be telling.  

    Im_Getting_Dizzy  SCORE=   0 (  0)  Scott Matthesen
	Sorry ... No questionnaire was returned for Im_Getting_Dizzy

           LetterRip  SCORE=   0 (  0)  Michael Rios
LetterRip submitted by Michael Rios at riosm-at-lucent.com
++ Please summarize the algorithm used by LetterRip

Do nothing.  Do it quickly.

++ How much did you test?

Once.  It did nothing, and quickly.

++ If I had it to do all over - here's what I would try ....

Coming up with a cool name *and* a method of solving the puzzle.


I live in Schaumburg, Illinois and work at Lucent in 
Naperville,  Illinois.  For fun, I play racquetball and 
tennis, I bowl, and I construct puzzles for Games 
magazine and an upcoming book.  My innermost secret must 
remain my innermost secret, but I'll tell you this: one day 
I hope to run a contest just like this one.  :-)

          Lexomaniac  SCORE=   0 (  0)  Austin Green
	Sorry ... No questionnaire was returned for Lexomaniac

   Microword_Soft_99  SCORE=   0 (  0)  Boris Bulayev
	Sorry ... No questionnaire was returned for Microword_Soft_99

       Millenium_Bug  SCORE=   0 (  0)  Petros Tsantoulis
Millenium_Bug submitted by Petros Tsantoulis at bm-gugduc-at-otenet.gr
++ Please summarize the algorithm used by Millenium_Bug

  I perform a simple depth-first search, stopping if the words 
  intersect or if the addition of a new word will surely prevent us
  from reaching the (current) maximum score, thus effectively eliminating
  some search paths. This is quite naive.
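
A minimal branch-and-bound sketch of such a pruned search (my illustration, not the submitted code; a placement is a set of grid cells, and the score is letters covered):

```python
# Depth-first search over non-intersecting placements, pruning branches
# whose best possible score cannot beat the best found so far.
def best_score(placements, chosen=frozenset(), start=0, best=0):
    # Upper bound: letters already chosen plus every remaining placement.
    bound = len(chosen) + sum(len(p) for p in placements[start:])
    if bound <= best:
        return best                     # prune this search path
    best = max(best, len(chosen))
    for i in range(start, len(placements)):
        if chosen & placements[i]:      # words may not share a letter
            continue
        best = best_score(placements, chosen | placements[i], i + 1, best)
    return best

placements = [frozenset({(0, 0), (0, 1)}),
              frozenset({(0, 1), (0, 2)}),
              frozenset({(1, 0), (1, 1), (1, 2)})]
```
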
++ How much did you test?

  I only used 2 very simple tests. I was too bored to do rigorous testing.

++ If I had it to do all over - here's what I would try ....

  I am currently trying to find a way to transform this problem to some
  better known computer science problem, so that I can greatly reduce
  searching. (I tried another approach, but it used too much memory). 

  a) Greece, b) Write computer programs and play the piano, 
	c) Study, d) I spend more time on my computer than with 
	my friends, e) I do have one, but I will have to think it over. 

My_Working_Wordsearch  SCORE=   0 (  0)  Joel Donovan
My_Working_Wordsearch submitted by Joel Donovan at jmaster512-at-home.com
++ Please summarize the algorithm used by My_Working_Wordsearch

++ How much did you test?

++ If I had it to do all over - here's what I would try ....


 My algorithm was in the "search" function which called the recursive
"check_at" function.  The search function, starting at the top left
corner of the puzzle, searches for the first letter in the target word. 
Then it relays the letter's coordinates, along with the target word with
the first (found) letter removed, to check_at.

 Check_at looks in each direction, starting with NORTH and moving
clockwise, to find the word's new first letter.  If check_at encounters
the letter, it relays this letter's coordinates and the word (minus the
found letter) to a recursive call of check_at.  If check_at cannot find
the letter in any of the eight possible directions, it returns a false
value.  If check_at returns false to a previous call to check_at, the
previous call continues to look in the remaining directions to find the
letter again.  Check_at returns true if there is no word to find
(meaning that all letters were successfully found and removed), or if
its own call to check_at also returned true.

 When the initial call to check_at returns true to the search function,
the coordinates are printed and search continues to look for other
instances of the target word until it reaches the bottom right corner of
the puzzle.
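
The recursion above might look roughly like this in Python (a paraphrase of the description, not Joel's original source; the self-intersection check via `path` is my addition):

```python
# NORTH first, then clockwise: N, NE, E, SE, S, SW, W, NW.
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def check_at(grid, r, c, rest, path):
    if not rest:                          # all letters found: success
        return True
    for dr, dc in DIRS:
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                and grid[nr][nc] == rest[0] and (nr, nc) not in path):
            path.append((nr, nc))
            if check_at(grid, nr, nc, rest[1:], path):
                return True
            path.pop()                    # failed: back up, try next direction
    return False

def search(grid, word):
    hits = []
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == word[0]:
                path = [(r, c)]
                if check_at(grid, r, c, word[1:], path):
                    hits.append(path)
    return hits

grid = ["cat",
        "oxa",
        "dog"]
```
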

// --------

I tested as much as possible, but it wasn't until December 22 that the
"multiple instance" version of my word search program ran correctly.  As
it turned out later, I misinterpreted the contest rules to say that grid
letters could be reused between distinct words, just not when finding a
copy of the same word or the remaining letters of the word.  I now have
a running version that follows the rules correctly, but I have not
tested it enough to be assured of its accuracy.

// --------

If I could start over, I would have started earlier by a month or two. 
I also would have written the program to follow the rules correctly the
first time.  As for my algorithm, it would stay about the same.  I would
spend a lot longer debugging, too.

// --------

I live in a small suburb outside Pittsburgh, PA.  I love video games,
especially Zelda.  I own an N64 and a Game Boy.  Just recently I got
hooked on, of all things, Pokemon!  I'm a sophomore in high school, and
I get pretty good grades.  I've been programming since I was ten.  I
really don't feel like revealing my innermost secrets to whomever plans
to read this.

       My_first_POTM  SCORE=   0 (  0)  Alex Geevarghese
	Sorry ... No questionnaire was returned for My_first_POTM

                 OED  SCORE=   0 (  0)  Martin Fouts
	Sorry ... No questionnaire was returned for OED

                  PI  SCORE=   0 (  0)  Amir Shimoni
	Sorry ... No questionnaire was returned for PI

          PathFinder  SCORE=   0 (  0)  Fabio Pistolesi
PathFinder submitted by Fabio Pistolesi at fabiop-at-france.sun.com
++ Please summarize the algorithm used by PathFinder

 The algorithm creates a list of paths (sequences of input-grid cells) where
 an input word lies. At the same time, it keeps track of which cells are used,
 giving each cell a hit counter (the number of paths through the cell).
 From these two pieces, a collision matrix is constructed, where path i 
 collides with path j iff matrix(i,j)=1. Note that matrix(i,j)=matrix(j,i).
 That's the starting point for finding the set of non-colliding paths.
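
A small sketch of that construction (my reconstruction; `build_collisions` and the example paths are invented):

```python
# Build per-cell hit counters and a symmetric collision matrix for a
# list of paths (each path is the sequence of cells a word occupies).
def build_collisions(paths):
    hits = {}
    for path in paths:
        for cell in path:
            hits[cell] = hits.get(cell, 0) + 1
    n = len(paths)
    matrix = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if set(paths[i]) & set(paths[j]):      # shared cell
                matrix[i][j] = matrix[j][i] = 1    # matrix is symmetric
    return hits, matrix

paths = [[(0, 0), (0, 1)], [(0, 1), (0, 2)], [(1, 0), (1, 1)]]
hits, matrix = build_collisions(paths)
```
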

++ How much did you test?

 I used 10 grids, including the system test. I can say I am confident in
 the parts finding paths and their collisions. Not so confident in the set
 search.

++ If I had it to do all over - here's what I would try ....

 Use only half of the collision matrix (it uses too much memory);
 a smarter path-finding algorithm (better use of the hit counters...);
 a faster set search.


 I currently reside in Grenoble, France, and work as a QA engineer.

             SHIVA99  SCORE=   0 (  0)  Sourav Chakraborty
++ Please summarize the algorithm used by SHIVA99

Step 1: Transfer the grid to a 2-D array.

Step 2: Transfer the words to an array of strings.

Step 3: While words are left to be searched do
        Step 4: Choose the current word.
        Step 5: Search for the first letter of the word in the grid.
        Step 6: While letters of the word remain and the
         last search has not failed do
         Step 7: If the current letter exists,
          search for the next letter in the grid
          such that it is located either diagonally or
          perpendicularly to the previous letter. Check that this
          location has not already been used in a previous occurrence
          of this word.

         Step 8: If successful, save the location in the 2-D array.
        Step 9: Go to Step 6.
        EndWhile (Step 6)
        Step 10: Go back to Step 5 to look for another instance of the word.
        EndWhile (Step 3)

++ How much did you test?
All my code was fully tested on the test grids you sent.

++ If I had it to do all over - here's what I would try ....

New Delhi, India.
Music, reading, coding, and my new-found hobby: OS kernel development.
I am a student at the University of Delhi, studying B.Sc.(H) Computer Science.
None. Only this: why did my code never work at your end, while all the test
runs ran fine at my place? (innermost question)
Such realistic problems are fun to solve. I'm looking forward to more of
these and am certainly interested to know when the next POTM contest is.

      Search-a-delic  SCORE=   0 (  0)  Kevin Schuetz
Search-a-delic submitted by Kevin Schuetz at meat-at-wctc.net
++ Please summarize the algorithm used by Search-a-delic
First, all possible occurrences in the puzzle needed to be found.  For each
position in the grid, the algorithm would recurse through all eight
surrounding positions, and short-circuit if the letters can't possibly form
a word.  All occurrences found are thrown into a linked list.
The second part of the algorithm attempts to find the optimum combination of
occurrences to keep.  To test all possible combinations would take hours, so
I wrote an algorithm to attempt to break all of the occurrences into
non-colliding groups, then attack each group individually (this would
decrease the amount of time to seconds rather than hours).
I tried this on my own test grid and it worked great.  However I didn't have
the right vision at the time...I tried the official test grid, and it turned
out that none of the occurrences could be isolated, so it was one big group.
This would take hours as well!
So I then had to try another algorithm for the second part.  I made a second
linked list containing conflicts...each occurrence had pointers to conflict
nodes.  When one of the occurrences dies, its conflicts die too.
I made a loop that iterates all occurrences, and finds the best one to kill
(by counting its letters and subtracting conflicting occurrences' letters).
Throughout all occurrences, the one with the lowest net value is killed, and
so are the corresponding conflicts.  If a dead occurrence is found with no
conflicts, it is revived.
The loop repeats until no occurrence is found to kill.  Any occurrences that
are still alive are the ones that become the final result.
It is a lot faster than trying all combinations, but it is still less than
perfect because no matter how much tweaking I did, it only found 82 letters
in the test grid!
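
The kill loop might be sketched like this (my reconstruction of the idea, not Kevin's code; occurrences are cell sets, and the revive step is folded in by only ever killing occurrences that still have live conflicts):

```python
def prune_occurrences(occs):
    """Repeatedly kill the conflicting occurrence with the lowest net value
    (its own letters minus the letters of live conflicting occurrences)."""
    alive = set(range(len(occs)))
    conflicts = {i: {j for j in alive if j != i and occs[i] & occs[j]}
                 for i in range(len(occs))}
    while True:
        in_conflict = [i for i in alive if conflicts[i] & alive]
        if not in_conflict:
            break                        # nothing left to kill
        net = lambda i: len(occs[i]) - sum(len(occs[j])
                                           for j in conflicts[i] & alive)
        alive.remove(min(in_conflict, key=net))
    return alive

occs = [frozenset({(0, 0), (0, 1), (0, 2)}),   # 3 letters
        frozenset({(0, 2), (1, 2)}),           # 2 letters, conflicts with #0
        frozenset({(2, 0), (2, 1)})]           # conflict-free
```
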

++ How much did you test?
I tested several times with my own test grid, but by the time I had gotten
the official test grid I discovered that my own test grid wasn't very good!
I then used the official test grid for the remainder of my work.  Though my
test data wasn't very diverse, I probably put 50% of my time tweaking the
program to work with the test grid.

++ If I had it to do all over - here's what I would try ....
I started over two or three times already. =)
I don't know what I would try...I considered dozens of ways already, and I
found the way I described above to work the best for me.

I live in Madison, WI.  For fun:  listening to and composing music,
roadtrips, weightlifting, playing bass guitar, computer games, and
programming!  For work:  designing and programming 3-D graphics.  Innermost
secret:  it's so deep down I can't find it!  POTM ideas:  I can't think of
any specific ideas, but card games are a good place to find potential
math/programming problems.

        Seattle_Slew  SCORE=   0 (  0)  David Morgan
	Sorry ... No questionnaire was returned for Seattle_Slew

               Snake  SCORE=   0 (  0)  Kris Kenis
	Sorry ... No questionnaire was returned for Snake

              Spread  SCORE=   0 (  0)  Tran Cong_Thu
	Sorry ... No questionnaire was returned for Spread

       StPaulisQuilt  SCORE=   0 (  0)  Carl Burke
	Sorry ... No questionnaire was returned for StPaulisQuilt

    Twisted_SeArChOr  SCORE=   0 (  0)  Marcin Mika
Twisted_SeArChOr submitted by Marcin Mika at konsept48-at-hotmail.com
++ Please summarize the algorithm used by Twisted_SeArChOr
Find every single possible arrangement of the words,
	then pick the best one.

++ How much did you test?
a lot.

++ If I had it to do all over - here's what I would try ....
I would try to think of a different, more efficient algorithm.

Toronto, Canada.

Lose POTM contests.

as little as possible.

I spell the word 'concept' with a K.

           Vspikeman  SCORE=   0 (  0)  Spiros Mourgoyiannis
	Sorry ... No questionnaire was returned for Vspikeman

      We_Be_Searchin  SCORE=   0 (  0)  Tim Brierley
 We_Be_Searchin submitted by Tim Brierley at tbrierley-at-home.com
 ++ Please summarize the algorithm used by We_Be_Searchin
 After finding all of the possible paths for each word, the program fills in
a grid that is used to weight the paths.
Each path gets its own weight (so a single word, if it has 13 different ways
to be found on the grid, will have 13 different weights).
To compute the weight, the program sums the lengths of each word that has a
path using each grid element.
It then sums the value placed in each element along each of the paths,
creating the weight of the path.
The paths are then sorted by their weight and used in order (after a check
to see whether the path can be added without repeating a letter).
The paths are also sorted by their weight minus the square of the word
length, as well as 0, 1, and 2 times the word length (just to get a variation
in the data tested; I found the algorithm fast enough to handle 4 checks...)
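
Under the scheme just described, a sketch might look like this (my reconstruction; the sort direction, `solve`, and the example paths are guesses, not the author's code):

```python
# Weight each path by the summed cell values along it, then greedily add
# paths in sorted order, trying the four weight variants mentioned above.
def solve(paths):
    cell_val = {}
    for p in paths:                       # cell value = total length of
        for c in p:                       # words with a path through it
            cell_val[c] = cell_val.get(c, 0) + len(p)
    best = []
    for k in (0, 1, 2, "squared"):
        def key(p):
            w = sum(cell_val[c] for c in p)
            return w - (len(p) ** 2 if k == "squared" else k * len(p))
        used, picked = set(), []
        for p in sorted(paths, key=key):
            if not used & set(p):         # no letter may repeat
                used |= set(p)
                picked.append(p)
        if sum(map(len, picked)) > sum(map(len, best)):
            best = picked
    return best

paths = [((0, 0), (0, 1)), ((0, 1), (0, 2)), ((1, 0), (1, 1), (1, 2))]
```
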

 ++ How much did you test?
 I am not big on testing; I tested enough to get it to run without errors
for many test cases.
I put in a few test cases that I thought would mess up my algorithm, but the
program worked fine and found great solutions for each case... maybe about 3
hours of testing and fixing overall (about 30 hours total programming? I
don't remember, but I think it was about that)

 ++ If I had it to do all over - here's what I would try ....
I would try to find a way to search for the BEST solution, then obviously
optimize it a bunch.  My method searched for a good, but not best, solution.
I would want to make guesses at the best solution and keep trying until I
hit a 10-minute timer.

Howard County, MD.
I like to play Ultimate frisbee (but I don't get to do that much)
I like to canoe (but I don't get to do that much)
I like to ski (snow and water)
Hmm...  lots of other things; hang out with my friends

I don't work, but I do have an internship with a local computer gaming company.

My innermost secret would be that I eat cows...
Yes, I know... all of these programming contests focusing on cows,
USACO for example ... I admit it, I EAT cows....  I'M SORRY!!!!

  Websters_Nightmare  SCORE=   0 (  0)  Joshua X
Websters_Nightmare submitted by Joshua X at
++ Please summarize the algorithm used by Websters_Nightmare
First I used a recursive algorithm to find all
possible combinations of all possible words from the
word list, with no regard for whether they re-used letters
or not.  Second I used a set of weighted rules and
assigned a score to each word, and evaluated the word's
score to determine whether it was better to use that
word, and sack those that it overlapped, or to use the
words that it overlapped and to abandon it instead.

++ How much did you test?

Obviously, not enough.

++ If I had it to do all over - here's what I would
try ....

I would put more effort into the elimination step.


I live in southern California. For fun I travel,
program, see movies and write.  I am a software
engineer.  I think it would be fun to have a POTM
where the entry programs each controlled a mobile tank
or warrior in an arena, and the winner would be the
last surviving tank or warrior.

    Where_Is_My_Word  SCORE=   0 (  0)  Mike Arsenault
	Sorry ... No questionnaire was returned for Where_Is_My_Word

             Windbag  SCORE=   0 (  0)  Guy Smith
Windbag submitted by Guy Smith at guy-at-cometsystems.com
++ Please summarize the algorithm used by Windbag

Algorithm?  What's an algorithm?  Windbag used the brainless approach of
random brute force.  First it identified all possible words in the array,
and then it chose groups of words at random until time ran out, keeping the
best combination.  That is, unless it core dumped.  I used an inefficient
method of identifying words, in which it first identified all possible words
(including words that intersected themselves) and then eliminated the
self-intersecting words.  I expect this would cause very bad behavior for
ugly inputs (which is probably what it was given on the final run).  In
fact, I started writing a new version that eliminated this drawback, but I
only got as far as reading in and identifying all non-self-intersecting
words in one pass before I got bored.

++ How much did you test?

Test?  People test their code?  Hmmm...

++ If I had it to do all over - here's what I would try ....

Well, I would certainly have spent more than one week writing this.  One
thing I would do would be to write it in pure C, instead of C++.  I used C++
solely for the STL, and after my first draft (the only draft I completed, in
fact) I decided that there were much more efficient data structures, that
were not easily expressed with STL.  Also, I would do some testing ;-)


New York City.  For fun: Pool.  Beer.  Evolvable Hardware.  For work: I
write network servers.  Innermost secret: it's a SECRET!  POTM ideas: I have
nothing worthy of the POTM.  

        Word-O-Matic  SCORE=   0 (  0)  Peter Szkiela
Word-O-Matic submitted by Peter Szkiela at petszk-at-crossecom.com.au
++ Please summarize the algorithm used by Word-O-Matic
Simply to search the word sleuth as many times as 
possible in 9 minutes, 55 seconds, searching for the 
words in a few different orders:
smallest first,
largest first,
and, when finding a word, 50% of the time searching for the 
same word again immediately, or otherwise moving that word 
to the end of the list and moving on to the next word.

++ How much did you test?
I came up with about half a dozen different-sized grids, 
including two 80 x 26 sleuths.

++ If I had it to do all over - here's what I would try ....
Exactly the same thing...although at this point I don't 
know how Word-O-Matic did in the final results... 8)

Live: Perth, Australia
Fun: Programming, Soccer, Basketball
Work: Programming
Innermost Secret: I'd like to be able to list Soccer and / or 
Basketball as "work".

            WordSkop  SCORE=   0 (  0)  Pavlodar AlauTransGaz
	Sorry ... No questionnaire was returned for WordSkop

            WordStir  SCORE=   0 (  0)  Pete Wells
WordStir submitted by Pete Wells at pwells-at-chez.com
++ Please summarize the algorithm used by WordStir

To be honest, it's so long since I looked at it, I've got to go back and check.

OK. Now I remember. It basically does a brute force check of all the
combinations it has time for, starting with the longest words. Once it
has placed the large words, it just runs through the remaining words (in
decreasing size order) and if the word fits, it uses it.
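
The greedy tail of that process (placing the remaining words in decreasing size order, keeping any that fit) might be sketched as below; this is my illustration of that one stage, not Pete's code, and it omits the brute-force phase on the largest words:

```python
# Greedy fill: place words longest first, keeping a word only when none
# of its cells are already taken.
def greedy_fill(placements):
    """placements: (word, set_of_cells) pairs."""
    used, kept = set(), []
    for word, cells in sorted(placements, key=lambda p: -len(p[0])):
        if not used & cells:
            used |= cells
            kept.append(word)
    return kept

placements = [("abc", {(0, 0), (0, 1), (0, 2)}),
              ("cb", {(0, 2), (0, 1)}),        # overlaps "abc"
              ("dd", {(1, 0), (1, 1)})]
```
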

++ How much did you test?

Not very much. I tried one of the large tests posted on the message
board, and it didn't cope well at all. Unfortunately, I haven't really
had time since then to improve it.

++ If I had it to do all over - here's what I would try ....

I'd probably try something similar again. I'm a simple soul.


Warwickshire, England. Fun... what's fun? I'm a software engineer.

Secrets are just that - secret. You're not supposed to tell anyone
(unless it's about someone else :-)

         WordWiggler  SCORE=   0 (  0)  Don Kirkby
WordWiggler submitted by Don Kirkby at DonKirkby-at-SierraSys.com
++ Please summarize the algorithm used by WordWiggler
	Search for all occurrences of words in the list, regardless of
overlap. Record their positions in a list.
	Use best-first search to find the best solution we can. Each move
consists of adding a word from the found list to a grid.
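
A small best-first sketch of those moves (my illustration; the write-up doesn't describe the actual heuristic, so this one simply expands the state with the most letters covered first):

```python
import heapq, itertools

def best_first(placements):
    """placements: list of frozensets of cells; returns max letters covered."""
    tie = itertools.count()               # tie-breaker for equal priorities
    heap = [(0, next(tie), 0, frozenset())]
    best = 0
    while heap:
        neg, _, i, used = heapq.heappop(heap)   # most letters covered first
        best = max(best, -neg)
        for j in range(i, len(placements)):
            if not used & placements[j]:        # move: add one found word
                heapq.heappush(heap, (neg - len(placements[j]), next(tie),
                                      j + 1, used | placements[j]))
    return best

placements = [frozenset({(0, 0), (1, 0)}),
              frozenset({(1, 0), (2, 0)}),      # conflicts with the first
              frozenset({(0, 1), (1, 1)})]
```
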
++ How much did you test?
	I made up my own grid and list, checked that the found list was
correct, and then ran the program. I checked to make sure there were no
overlaps and submitted it.
++ If I had it to do all over - here's what I would try ....
	I used extremely simple data structures to get the algorithm
working. The next step would be to replace the arrays with a heap and a
search tree. Another thing to try would be to abandon some search paths as
hopeless.
	Vancouver, B.C.
	Ultimate Frisbee
	Software developer for Sierra Systems Consultants Group
	My middle name is not Louise.

          Wordofobia  SCORE=   0 (  0)  Henco Cronje
Wordofobia submitted by Henco Cronje at Henco.Cronje-at-eskom.co.za
++ Please summarize the algorithm used by Wordofobia

I used a recursive function to search for all the words and 
stored them in an array.
After that I sorted the array from the longest to the 
shortest word, then placed the longest word, and then each 
following word if it didn't cross one already placed.

++ How much did you test?

I tested my program with a few files sent to me by my brother,
 Jaco Cronje, the winner!!

++ If I had it to do all over - here's what I would try ....

A function selecting random words for 10 minutes and then outputting 
the best solution found.
(This was some advice from my brother!)


I live in South Africa and I'm an Electrical Engineer. I program for 
fun and I build amplifiers and subwoofers.
I'm working for an electrical utility, Eskom, in SA, doing protection relay
settings. I don't know for how long, because my brother 
Jaco and I are starting a computer business.

          Wordsearch  SCORE=   0 (  0)  John Swift
Wordsearch submitted by John Swift at john-at-enigma-jb.freeserve.co.uk
++ Please summarize the algorithm used by Wordsearch
No detail went into the algorithm for my submission.  It just sorts the
words longest first and tries to find them.  I created an array that flags
when each letter is used, and searches repeatedly for matching words with
letters that have not been used.

++ How much did you test?
Not very much, that's why I'm only in the middle of the list.

++ If I had it to do all over - here's what I would try ....
Possibly try to permute the word selection, converting to assembly for
more speed because of the overhead.

I was born and bred in Barnsley, England.
I like playing snooker and golf also dabbling in most forms of
I work for PLM Redfearn as a data controller on a SAP system.

                bfai  SCORE=   0 (  0)  Sam Holden
	Sorry ... No questionnaire was returned for bfai

             bon_mot  SCORE=   0 (  0)  John Linderman
bon_mot submitted by John Linderman at jpl-at-research.att.com
++ Please summarize the algorithm used by bon_mot

1) Find all grid sequences that correspond to items in the word list.
2) Select a grid position that is shared by two or more grid sequences.
3) Sort the grid sequences that share that position by decreasing length.
4) Eliminate ALL the sequences in conflict there, and try plugging
   in each sequence (and no sequence at all), eliminating other
   sequences that are in conflict at other positions with each candidate.
5) Repeat 2-4 until no conflicts remain, noting board score of result.
6) Prune at step 4 when best possible score is lower than best observed score.

++ How much did you test?

Quite a bit, up to Thanksgiving, when work intruded on my fun.
The testing revealed many flaws, and I'll have to rely more on
luck than good design to not run out of memory in the finals.

++ If I had it to do all over - here's what I would try ....

It's clear that some boards (like all A's, with a long string of
A's as the only word in the word list) will blow away memory before
I have a list of all sequences in the grid.  I need to defend better
against such possibilities.  With hundreds of conflicts, my pruning doesn't
prevent ``getting trapped in an unproductive corner of the search space''.
Instead of the current depth-first search, I'd almost surely do better
to do a quick search of all the conflicts at one position, then conduct
another search below the word sequence that got the highest score,
and so on.  This helps avoid the built-in preference for long words,
when shorter words are actually best (and Fred warned us he'd likely
pick a puzzle where that would be the case).


I confess great glee at my entry name, ``bon_mot''.  The dictionary
definition is ``a clever saying'', satisfying the universal
smart-ass urge to take Fred literally when he specifies ``something
clever, please'', for entry names.  But the underlying etymology
is ``good word'', which is a pretty respectable description
of my algorithm, trying to select a ``good word'' from the choices at
each point of conflict.  Doesn't take much to make ME gleeful.

              brutus  SCORE=   0 (  0)  Hampus Weddig
	Sorry ... No questionnaire was returned for brutus

         chomp_words  SCORE=   0 (  0)  Andre Wilhelmus
chomp_words submitted by Andre Wilhelmus at wilhelmus-at-lucent.com
++ Please summarize the algorithm used by chomp_words
Simulated annealing.
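
The write-up names the technique without detail, so here is a generic simulated-annealing sketch for this problem (entirely my illustration): flip one placement in or out of the solution, keep only valid states, and accept score drops with a temperature-dependent probability.

```python
import math, random

def anneal(placements, steps=20000, t0=2.0, seed=1):
    """placements: list of frozensets of cells; maximize letters covered."""
    rng = random.Random(seed)
    on = [False] * len(placements)

    def valid():
        used = set()
        for k, flag in enumerate(on):
            if flag:
                if used & placements[k]:
                    return False          # two chosen words share a cell
                used |= placements[k]
        return True

    def score():
        return sum(len(placements[k]) for k, flag in enumerate(on) if flag)

    cur = best = 0
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9    # linear cooling schedule
        k = rng.randrange(len(placements))
        on[k] = not on[k]                     # propose: flip one placement
        if not valid():
            on[k] = not on[k]                 # invalid: undo immediately
            continue
        new = score()
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new                         # accept (always, if not worse)
            best = max(best, cur)
        else:
            on[k] = not on[k]                 # reject: undo the flip
    return best

placements = [frozenset({(0, 0), (0, 1)}),
              frozenset({(0, 1), (0, 2)}),
              frozenset({(1, 0), (1, 1), (1, 2)})]
```
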

++ How much did you test?
As much as possible but it can't handle a devious grid.

++ If I had it to do all over - here's what I would try ....
Enter earlier.

The Netherlands.
Programming kuang, an Expert Security System for Unix

      cookie_monster  SCORE=   0 (  0)  Vasilis Vasaitis
cookie_monster submitted by Vasilis Vasaitis at vvas-at-egnatia.ee.auth.gr
++ Please summarize the algorithm used by cookie_monster

  The only version of my code that I submitted was a breadth-first
exhaustive searcher that easily consumed all virtual memory, and its main
purpose was to ensure portability to the POTM machine as well as to provide
me with the system test. (The name of the program was inspired by the Cookie
Monster, a character in the TV series for children `Open Sesame', that ate a
lot of cookies, just like this code eats a lot of memory). Of course, this
could achieve nothing but die with a bang.

  I will now proceed to describe the algorithm that I thought of afterwards.
In this description I will use the word `appearance' to refer to all the
places that the words of the test appear in the grid. The algorithm was
supposed to be incremental (but see below) and it goes like this:

  - If you are examining only one (1) appearance of the words, then the best
thing you can do is use it and thus have the optimal solution (number of
letters and number of words).

  - Let's assume now that you are examining k appearances, and you have
found the best selection of those that forms the optimal solution.

  - Now suppose you want to go to k+1 appearances and have the optimal
solution. You have two choices:

    1) You don't include the new appearance and leave everything as it is.

    2) You include the new appearance (a) and exclude from the solution all
  the appearances with which it has common letters (b). You then proceed to
  include all the previously excluded appearances that no longer conflict
  with any of the already included appearances (c).

  This algorithm seemed perfect for a while. It seemed to be a fast,
incremental and polynomial-complexity algorithm. Unfortunately, there's a
catch: in (2.c), how do you know which appearances to include from those
available? It turns out that this is a subproblem of the original problem,
so recursion is not avoided. Of course, the algorithm was still worth a try,
and with some aggressive caching it might do well, but it had become quite
complex already, and it was at that moment that my free time was
exterminated. So it was left unfinished.

++ How much did you test?

  The code that was submitted ran out of memory on the system test already.
The next code was never finished, so no testing was possible or even
meaningful.

++ If I had it to do all over - here's what I would try ....

  I would finish the algorithm that was described in the first answer.


  I live in Thessaloniki, the second biggest city of Greece. I am a student
of the computer science department of the local university. For fun I like
living, although it doesn't always turn out to provide much fun. As for an
innermost secret, I'm still searching for it myself. I'd like to see a POTM
contest where the description of the problem actively involves the
POTM-master himself (like `find the P.M.' or something, I don't know). I
think it would provide us with tons of amusement.

              eQiNoX  SCORE=   0 (  0)  Vinod Pathangay
	Sorry ... No questionnaire was returned for eQiNoX

              eRaZOr  SCORE=   0 (  0)  eRaZ0r X
	Sorry ... No questionnaire was returned for eRaZOr

            frogstar  SCORE=   0 (  0)  Kurt Van_den_Branden
frogstar submitted by Kurt Van_den_Branden at kurtvdb-at-village.uunet.be
++ Please summarize the algorithm used by frogstar
I started out with an algorithm that would look up all the possible
words in the grid and then try to find the best possible combination.
The problem with that one is that on some inputs you can find so many
words that the program would go out of memory or out of time before even
finding all the possible words.

The last program I submitted takes a position somewhere in the grid and
starts looking for a word (it searches a little bit at random, so you
get different results if you run it more than once). If it has found
a word, it starts looking again from the position with the fewest free 
neighbour positions.
This gets repeated until no new words can be added.
++ How much did you test?
not enough!
I used the system test files and some files I encountered in
the discussion board but that's about it.
++ If I had it to do all over - here's what I would try ....
I don't really know. I looked at several algorithms, but none
of them was good for all kinds of input. Some would get a good result
on not-so-difficult input, but would fail completely on the harder problems.
Others could give you a result on almost all inputs, but never a good one.
I'm from Belgium, I'm a software engineer, I play gipf and other board games.

i_am_searching_in_the_night  SCORE=   0 (  0)  Vangelis Rokas
	Sorry ... No questionnaire was returned for i_am_searching_in_the_night

           lotto2000  SCORE=   0 (  0)  Luc Kumps
	Sorry ... No questionnaire was returned for lotto2000

            mycelium  SCORE=   0 (  0)  Doug McIlroy
mycelium submitted by Doug McIlroy at doug-at-cs.dartmouth.edu 

There were two generations of mycelium, all knowingly vulnerable
to a simple adversary that had too many matches, e.g. a checkerboard
of As and Bs, with a vocabulary that consists of a lot of different
strings of As and Bs, or even a layout of all As with a vocabulary
of one string of 20 As.  Both versions first located
all matches then attempted to tile the layout with matches.

Locating matches: put all the strings in a trie, then for each
cell in the layout, recursively walk paths beginning at that cell
and at the root of the trie to find all the words that begin
at the cell.
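
That trie walk might look like this (my reconstruction, allowing the eight-neighbour bends this POTM permitted; `make_trie`, `find_matches`, and the example data are invented):

```python
def make_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = w                     # end-of-word marker
    return root

def find_matches(grid, words):
    trie, found = make_trie(words), []
    R, C = len(grid), len(grid[0])

    def walk(r, c, node, path):
        node = node.get(grid[r][c])       # follow trie and grid together
        if node is None:
            return                        # no word continues this way
        path = path + [(r, c)]
        if "$" in node:
            found.append((node["$"], path))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if ((dr or dc) and 0 <= nr < R and 0 <= nc < C
                        and (nr, nc) not in path):
                    walk(nr, nc, node, path)

    for r in range(R):
        for c in range(C):
            walk(r, c, trie, [])
    return found

grid = ["cat",
        "oxa",
        "dog"]
matches = find_matches(grid, ["cat", "dog"])
```
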

Paths in the layout were represented by bit vectors so that
collisions of one path with itself or between two paths
could be quickly located.  Minor auxiliary info kept with
each bit vector avoided examining whole words of zeros.

Tiling:  The first version greedily went down the list
of strings, stuffing every string that didn't collide with
what had already been placed into the layout.  Different 
solutions were found by randomly permuting the list of
strings, and the best solution was picked.  Greedy
solutions were improved by local search:  all the
strings that intersected a small block of the layout
were deleted from the solution, and the same greedy
method was used (on a vocabulary pertinent to that block)
to find a better collection of strings to cover that
part of the layout.  Separate vocabularies were precomputed
for each of the many (overlapping) blocks in the layout.

The second and final tiling method was nonheuristic--a
dynamic program.  Treat the cells in raster order.  For
the best ways to cover cells numbered less than k, keep a
"frontier"--the set of different patterns (bit vectors) in which
cells numbered greater than k would be covered by the strings
in an optimal covering up to k.    The frontier for k+1 is 
easily calculated from the frontier for k.  The final frontier
is empty and the covering that led to it is optimal.

++ How much did you test?
Not a whole lot; enough to verify that the code worked
on Fred's three problems, on a full-size layout with
instances of Fred's two smaller problems scattered
through it (which showed up troubles with bit vectors
of size greater than 64), and on a random layout of letters
constructed with dictionary frequencies and a random list
of words taken from the dictionary--both of max size.
These tests were intended to assess correctness more than
speed.  Only minor speed tuning was done.

++ If I had it to do all over - here's what I would try .... 
If I had thought of some approach that would work equally
well on the "general-case" problems for which my algorithms
were suited and on the hard low-entropy cases mentioned above,
I'd have done it.  I'm eager to see who found a way
to cover both.

Hanover, New Hampshire (Dartmouth).  As a retiree, I work for fun:
a bit of teaching, a bit of puttering (potming, gardening, woodworking), 
a bit of hiking and skiing, a bit of travel.  I try to play hockey 
with folks 1/3 my age.  If you like finding patterns with
words a la this potm, peek at http://www.cs.dartmouth.edu/~doug.

           skateword  SCORE=   0 (  0)  Arkady Zhikharev
	Sorry ... No questionnaire was returned for skateword

       trietrieagain  SCORE=   0 (  0)  Hank Turowski
	Sorry ... No questionnaire was returned for trietrieagain

         vanderZilla  SCORE=   0 (  0)  POTM Junkie
vanderZilla submitted by POTM Junkie at potm_junkie-at-yahoo.com
++ Please summarize the algorithm used by vanderZilla

Brain-dead. Probably not one of the 10% that finished. Oh well, it's not the
first time I didn't win the POTM. I think I'm starting to get used to it. I
have a perfect record to maintain: oh-for-nine.

++ How much did you test?

Test? Isn't "test" a four-letter word?

++ If I had it to do all over - here's what I would try ....

I'd be a professional golfer.


Live: Colorado (USA).
Fun: Mountain-things.
Work: Another four-letter word. I'm not looking forward to going back in the
new year (the new mu-len-eee-um . . . ooooohhhhh). I think it's in poor taste
for you to bring the subject up, Fred.
Innermost secret: Bozo.
POTM Ideas: I think Fred categorically rejects all my great POTM ideas.
	(editor ... not true - I read them first)

        wheres_waldo  SCORE=   0 (  0)  Daniel Klein
	Sorry ... No questionnaire was returned for wheres_waldo

              word2k  SCORE=   0 (  0)  Colin Rafferty
word2k submitted by Colin Rafferty at colin.rafferty-at-msdw.com
++ Please summarize the algorithm used by word2k

First, I take each word, and find all possible instances of it in the
grid.  In the system_test, this is cheap.  In the `toot' example, this
can be pretty expensive.  Oh well.

I then take the list of possible words, and use a dumb exponential
search to find the best set of these.

I try to make this second part fast by representing each word as a
bitmask of the grid, so that checking two words for overlap is just a
binary AND.

Of course, I have timing checks, because I don't think that my program 
would even finish the system_test in the rest of my lifetime, let
alone the ten minutes allotted.
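The bitmask overlap test and the dumb exponential search described above can be sketched like this (a hypothetical reconstruction: each placement becomes an integer with one bit per grid cell, and the search simply tries every subset):

```python
def mask(cells):
    """Encode a set of grid-cell indices as an integer bitmask."""
    m = 0
    for c in cells:
        m |= 1 << c
    return m

def best_subset(masks):
    """Exponential search: try every subset of placements, keep the one
    covering the most cells.  The overlap test is a single bitwise AND."""
    best = 0
    for choice in range(1 << len(masks)):
        used, ok = 0, True
        for i, m in enumerate(masks):
            if choice >> i & 1:
                if used & m:          # two words share a cell: reject subset
                    ok = False
                    break
                used |= m
        if ok:
            best = max(best, bin(used).count("1"))
    return best
```

The AND makes each overlap check O(1) in words of machine width, but the 2^n subsets are what make the timing checks below necessary.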

++ How much did you test?

My testing strategy was to keep submitting it to you, and having you
tell me what was wrong.

At the last minute, I browsed the message board, and realized that I
might never get through the first part of my program in ten minutes.

Boy would that have been embarrassing.

I used Quantify to try to squeeze some extra paths out of my code.
It seemed to work.  I get 79 characters instead of 78.

++ If I had it to do all over - here's what I would try ....

Try to create groups of words and test them against each other.  While
that algorithm is still O(2^n), it might get better partial results.


Brooklyn, New York.


Bicycling and volleyball.


Write C++ for a Major Investment Bank.


I wish I were a Fireman.

         wordStrider  SCORE=   0 (  0)  Michael Mossey
	Sorry ... No questionnaire was returned for wordStrider

          word_winds  SCORE=   0 (  0)  Alice McRae
word_winds submitted by Alice McRae at aam-at-cs.appstate.edu
++ Please summarize the algorithm used by word_winds

   My first algorithm tried to find all occurrences of all the
   words, but I didn't submit it.  It bombed on examples where 
   the number of paths exceeded available memory. The second 
   algorithm was written to save face, because I had told my students 
   I would enter and I was trying to encourage them to enter.

   The submitted algorithm is a genetic algorithm.  It creates
   a population of random solutions. Then repeatedly, until
   time runs out or no more progress is being made, it picks pairs of
   parent solutions and chooses some words found in the first solution and
   other words found in the second solution to create new child solutions.  
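The crossover step above can be sketched as follows (an editor's illustration, assuming each parent solution is a list of non-overlapping word placements, each a set of grid cells):

```python
import random

def overlaps(word, chosen):
    """True if `word` shares a grid cell with any placement in `chosen`."""
    return any(not word.isdisjoint(c) for c in chosen)

def crossover(parent_a, parent_b, rng):
    """Child solution: a random half of parent A's words, plus every word
    from parent B that doesn't collide with what's already placed."""
    child = [w for w in parent_a if rng.random() < 0.5]
    for w in parent_b:
        if not overlaps(w, child):
            child.append(w)
    return child
```

Because both parents are valid (non-overlapping) solutions and parent B's words are filtered against the child, the child is always a valid solution too.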

++ How much did you test?
   I didn't test with many hard examples, except Frederic vdP's
   and, after Christmas, Jaco Cronje's.  My program didn't
   do too well on the hard examples.  My home-made examples 
   were too easy.

   I tested my first algorithm on Frederic vdP's smaller
   example enough to learn that my algorithm was doomed.
   That was a difficult practice example, and I am sure many
   appreciated his test file.  My final program did run on his 
   smaller file, but my results were not as good as many others.

++ If I had it to do all over - here's what I would try ....

   I would definitely spend more time on a design
   for finding the words quickly.  I originally thought the problem 
   of finding the words was trivial (I was sure wrong), and that the 
   main problem was determining which occurrences of the words should 
   be used.  I already had a reliable heuristic for weighted independent 
   set, and I thought the problem of choosing non-overlapping words could 
   be reduced to a weighted independent set problem.
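That reduction can be sketched directly (an editor's illustration: each word occurrence becomes a vertex weighted by the cells it covers, with an edge between any two occurrences sharing a cell; a brute-force solver stands in here for the author's heuristic):

```python
from itertools import combinations

def to_conflict_graph(placements):
    """One vertex per placement, weighted by cells covered; an edge joins
    any two placements sharing a cell.  Sets of mutually non-overlapping
    words are exactly the independent sets of this graph."""
    w = [len(p) for p in placements]
    edges = {(i, j) for i, j in combinations(range(len(placements)), 2)
             if not placements[i].isdisjoint(placements[j])}
    return w, edges

def max_weight_independent_set(w, edges):
    """Brute-force solver, for tiny instances only."""
    n, best = len(w), 0
    for s in range(1 << n):
        verts = [i for i in range(n) if s >> i & 1]
        if all((i, j) not in edges for i, j in combinations(verts, 2)):
            best = max(best, sum(w[i] for i in verts))
    return best
```

The graph build is the reduction itself; any weighted-independent-set heuristic can then be plugged in where the brute-force solver appears.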

   Live: Boone, NC.
   Fun:  Hiking, cryptic crosswords, puzzles and games
   Work: Teach at Appalachian State University
   Innermost secret: I've always wanted to do medical research and be
      a part of finding a cure for some dreaded disease.
   POTM ideas: someone could start a POTM-Jr (some other name) for 
        advanced high-school computer science students or perhaps 
        allowing each high school class to contribute a class entry.
        Programming contests for high school students generally focus 
        on quickly writing the code for problems, and not on design
        or testing. I think students would learn more from contests 
        like these.  

            wordhunt  SCORE=   0 (  0)  David Wells
wordhunt submitted by David Wells at dwells-at-dnaco.net
++ Please summarize the algorithm used by wordhunt
Make a loop to:
  eliminate all invalid words
  select one at random
  remove word
  End Loop
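One reading of that loop, sketched below (an editor's guess at the intent: a placement becomes "invalid" once any of its cells has been claimed by an earlier pick):

```python
import random

def random_greedy(placements, rng):
    """Repeatedly: eliminate placements that no longer fit, select one of
    the survivors at random, claim its cells, and remove it from the pool."""
    used, chosen = set(), []
    pool = list(placements)
    while pool:
        pool = [p for p in pool if used.isdisjoint(p)]   # eliminate invalid
        if not pool:
            break
        pick = rng.choice(pool)                          # select one at random
        chosen.append(pick)
        used |= pick
        pool.remove(pick)                                # remove word
    return chosen
```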

++ How much did you test?
Not much (about 20 minutes)

++ If I had it to do all over - here's what I would try ....
Put more time on the problem.

I live in Englewood Ohio
I like doing puzzles and playing on the computer
I am a Software Engineer

                  ws  SCORE=   0 (  0)  Bernard Hatt
	Sorry ... No questionnaire was returned for ws

                zmiy  SCORE=   0 (  0)  Serge Sivkov
	Sorry ... No questionnaire was returned for zmiy
