%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
h1. Go go gallimaufry
(markus) ~/projects/osbridge/gallimaufry > dict gallimaufry | grep -vE '1913|[ ]{13}'
2 definitions found

From The Collaborative International Dictionary of English v.0.48 [gcide]:

  Gallimaufry \Gal`li*mau"fry\, n.; pl. {Gallimaufries}. [F.
     galimafr['e]e a sort of ragout or mixed hash of different
     meats.]

     1. A hash of various kinds of meats, a ragout.
     2. Any absurd medley; a hotchpotch.

From WordNet (r) 3.0 (2006) [wn]:

  gallimaufry
      n 1: a motley assortment of things [syn: {odds and ends},
           {oddments}, {melange}, {farrago}, {ragbag}, {mishmash},
           {mingle-mangle}, {hodgepodge}, {hotchpotch}, {gallimaufry},
           {omnium-gatherum}]
h1. 0. This is going to get a little confusing
!={height:90%}cheer.jpg! !cc-by-nc.png!

h2. ...so let's warm up a bit first
(for next year) write the name of a scripting language, a zoo animal, and a colour. (BTW, this talk is loaded with teasers for next year's talk; if you think you've caught the pattern come up afterwards.) Now write your PIN/password. Is everyone confused? Then let's dive in.

h1. 1. Peace on Earth, doing it wrong

* Shel Silverstein story
* song (One Tin Soldier)
* HPMOR, Voldemort in the monastery
* John Henry
* Happens in real life: Max's soccer game (rec league vs. competitive, dads, most kids & parents happier to lose & have fun than win miserably).

p.{large} Is it that the task is worth doing or that the secret is worth knowing?

h1. 2. And we intuitively know it's wrong

h1. Project Euler problem #1

An Allegory in five algorithms

h2. The problem description

bq. If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

bq. Find the sum of all the multiples of 3 or 5 below 1000.
* for t <- 23..1000000, curl http://projecteuler...
* for x <- 1..999, t <- t+x if x mod 3 = 0 or x mod 5 = 0
** (in scala) 1 to 999 filter { x => x % 3 == 0 || x % 5 == 0 } sum
* a = 3; b = 5; while a < 1000 or b < 1000, if a < b then t <- t+a; a <- a + 3 ... two generators
* three sets
* You meet a person who can do it in their head... given that 1 to n sums to n(n+1)/2, the answer is 3*(1 to 333 sum) + 5*(1 to 199 sum) - 15*(1 to 66 sum), or 3*333*334/2 + 5*199*200/2 - 15*66*67/2, or (999*334 + 199000 - 66*1005)/2, or (333666 + 199000 - 66330)/2, or 466336/2 = 233168. (Two of these are sketched in code below.)
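Here is a minimal sketch, in Scala (the one language the list above already uses), of the filter-and-sum version and the do-it-in-your-head closed form. The object and helper names (`Euler1`, `sumMultiples`) are my own illustration, not anything from the talk.

{{
// A minimal sketch, assuming Scala, of two of the algorithms above.
object Euler1 extends App {
  // Filter-and-sum, essentially the one-liner from the list:
  val filtered = (1 to 999).filter(x => x % 3 == 0 || x % 5 == 0).sum

  // Do-it-in-your-head closed form: 1 + 2 + ... + n = n(n+1)/2,
  // with inclusion-exclusion to avoid double-counting multiples of 15.
  // The helper name sumMultiples is mine, not from the talk.
  def sumMultiples(k: Int, limit: Int): Int = {
    val n = (limit - 1) / k          // multiples of k strictly below limit
    k * n * (n + 1) / 2
  }
  val closedForm = sumMultiples(3, 1000) + sumMultiples(5, 1000) - sumMultiples(15, 1000)

  println(filtered)    // 233168
  println(closedForm)  // 233168
}
}}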
h2. Bonus algorithms

* three triangles
* asymptotic approx.

h1. 3. Wrong, wrong, wrong

h1. The rules of Go

h2(center). (with possible minor errors or oversimplifications)

* Players take turns placing stones on the intersections of a 19x19 grid
* The intersections are connected by lines
* A "group" is a maximal connected set of same-coloured stones
* A "liberty" is an open space connected to a group
* A group with no liberties is captured and removed from the board (groups and liberties are sketched in code below)
* You can't play a suicidal move that would result in the capture of your own stone
* Capturing favours the attacker
* You can't return the board to a previous state

h1. Go in a few more minutes

* Eyes are holes in a group
* Two eyes live (groups of genus >= 2 cannot be captured)
* Scoring initially appears to have an element of feng shui to it
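As promised above, a minimal flood-fill sketch of "group" and "liberty" under the rules as stated. The Map-based board, the object name `GoSketch`, and everything else here are my own illustration, not code from the talk or from any go library.

{{
// A minimal sketch, assuming the rules as stated above.
object GoSketch {
  type Point = (Int, Int)
  val size = 19

  // The (up to four) on-board neighbours of an intersection.
  def neighbours(p: Point): Seq[Point] = {
    val (r, c) = p
    Seq((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
      .filter { case (x, y) => x >= 0 && x < size && y >= 0 && y < size }
  }

  // board maps occupied points to 'B' or 'W'; empty intersections are absent.
  // A "group" is the maximal connected set of same-coloured stones containing start.
  def group(board: Map[Point, Char], start: Point): Set[Point] = {
    val colour = board(start)
    def grow(seen: Set[Point], frontier: List[Point]): Set[Point] = frontier match {
      case Nil => seen
      case p :: rest =>
        val next = neighbours(p).filter(q => board.get(q).contains(colour) && !seen(q))
        grow(seen ++ next, next.toList ++ rest)
    }
    grow(Set(start), List(start))
  }

  // A "liberty" is an empty point adjacent to the group; a group with none is captured.
  def liberties(board: Map[Point, Char], start: Point): Set[Point] =
    group(board, start).flatMap(neighbours).filterNot(board.contains)
}
}}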
!Golf_balls_kallerna.JPG! http://commons.wikimedia.org/wiki/User:Kallerna !cc-by-sa.png!
* Golf balls have a radius of about 2.5cm,
* or 10/4cm,
* so a volume of 4/3 pi (10/4)^3 cm^3
* or pi/3 1000/16 cm^3
* At a packing density of 3/4, that's pi/9 1000/4 cm^3 of space per ball,
* or 36/pi (about 12) golf balls per liter.
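The same back-of-the-envelope numbers, spelled out as a quick check; this is a sketch only, and the object and variable names are mine.

{{
// A quick check of the estimate above (object and variable names are mine).
object GolfBallCheck extends App {
  val radius   = 2.5                                       // cm, the rough figure above
  val ballVol  = 4.0 / 3.0 * math.Pi * math.pow(radius, 3) // ~65 cm^3 per ball
  val packing  = 0.75                                      // assumed packing density
  val perLitre = packing * 1000.0 / ballVol                // 36/pi, about 11.5
  println(f"$perLitre%.1f balls per litre")
}
}}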
h1. The language "Brainfuck"

h2(center). (with possible minor errors or oversimplifications)

* A is an array of 30,000 bytes.
* I is an integer.
* `>` means `I += 1`
* `<` means `I -= 1`
* `+` means `A[I] += 1`
* `-` means `A[I] -= 1`
* `.` means `write_byte A[I]`
* `,` means `A[I] = read_byte`
* `[` means `Jump ahead past the matching ] if A[I] == 0`
* `]` means `Jump back to the matching [`

{{ ++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>. }}

!xkcd-365-slides.png! (Source: http://xkcd.com/365 ) !cc-by-nc.png!

h1. The ACM paper

h1. Is this progress?

h1. Is the Monte Carlo "solution" even a solution?

Suppose the problem we're attempting to solve was to design a robot that could play golf. To meet this challenge you build a "machine" consisting of a cube of golf balls roughly 21 meters on a side (about 9 million liters, so about 108 million balls at our 12 per liter, corresponding to the number of playouts considered by a typical Monte Carlo tree search) with an appropriately sized explosive charge buried near the middle. So you put your cube o' balls at the tee, set the charge off, and there's a fair likelihood that at least one of the balls will land in the cup. Even if it doesn't, many will land much closer to the cup and you can of course iterate -- after all, that's what golfers do.

Suppose you were silly enough to actually do this. Would any reasonable person contend that in doing so you had built a machine that played golf?

That golf analogy is way too flattering though. Suppose there are 150 moves left. Ignoring captures, there are 150! playouts, or roughly 6e+262. Clearly, we aren't going to find the perfect game playout. So what are the odds that the playouts are going to hit upon anything close to what will happen if both players make one of their 10 best moves at each turn? That will reduce the number of possibilities by about 1e140, to 1e122, which looks much better. How much better? Well, for comparison, there are on the order of 1e80 atoms in the observable universe. So suppose you look out in the night sky in a random direction (not necessarily towards a star) and think of an integer number of nanoseconds between 0 and 13 billion years. Go out that far, as the photon flies, in that direction, and write your initials on the closest atom to that spot. Meanwhile, I'll pick an arbitrary atom by some other method, and look to see if it has your initials on it. Assuming atomic graffiti isn't a widespread but heretofore unrecognized (by humans) blight, my odds of finding your atom on the first shot are roughly 1e42 better than the Monte Carlo playouts' odds of hitting a decent game.

So what's going on here? Why does it even seem to work, in the narrow sense of beating humans? There must be some structure that we aren't seeing. One way to find the structure: do the Monte Carlo search, or even an exhaustive search on smaller cases, and then do it again with a small change to the original state. Build up sets of cases in this way where making a small change changes the answer (or perhaps cases where it doesn't) and then look for patterns in them. Does shifting everything change the answer? Shifting by an even number of positions? Rotating 90 degrees? Making changes inside settled territory? But we're getting ahead of ourselves.

h1. 4. More on the theme of "wrong" (15; 45)

Soup-stone theories. There is not a Turing machine in your laptop, and it would be silly to think that there was.

Soup-stone models of the brain:

0. Radiator. Uh, yeah.
bq. For the brain, or in creatures without a brain that which corresponds to it, is of all parts of the body the coolest. Therefore, as moisture turned into vapour by the sun’s heat is, when it has ascended to the upper regions, cooled by the coldness of the latter, and becoming condensed, is carried downwards, and turned into water once more; just so the excrementitious evaporation, when carried up by the heat to the region of the brain, is condensed into a ‘phlegm’ (which explains why catarrhs are seen to proceed from the head); while that evaporation which is nutrient and not unwholesome, becoming condensed, descends and cools the hot. The tenuity or narrowness of the veins about the brain itself contributes to its being kept cool, and to its not readily admitting the evaporation. This, then, is a sufficient explanation of the cooling which takes place, despite the fact that the evaporation is exceedingly hot.

1. Homunculi. Isn't even an explanation.

2. Behaviorism, stimuli & response. Does. Not. Work.

3. Generalized learning machine / sponge. Doesn't fit the facts.

4. Penrose quantum magic

* All formal systems are either incapable of proving some truths or mistakenly accept some false propositions as true (Gödel).
* Computers are formal systems, and thus have the same limitation (Turing).
* I, Roger Penrose, am a mathematician and have no such limitation; given sufficient time & paper I can comprehend the truth of any true proposition and am never wrong (Penrose).
* Therefore, I am not a computer (Penrose)
* By Occam, you probably aren't either (Penrose)
* Quantum systems do some funky stuff that defies logic if you look at the small scale (Schrödinger)
* Microtubules are really small (De Robertis and Franchi)
* Ergo, quantum effects in the microtubules give us the power to transcend Gödel's limitation (Penrose)

5. Monte Carlo massively parallel system. We're back to "isn't even an explanation".

Why they abandoned "AI" for "ADD" (automated digital drudgery) -- structural decomposition to the "there's a Turing machine in my laptop" level didn't get them as far as they thought it would.

h1. 5. So what does right look like? (20; 65)

bq. "XML is like violence; if it doesn't solve your problem you aren't using enough of it."

Let's go back to our best model, the one that says the brain is a general purpose learning device, notionally equivalent to a computer with good AI software running on it. We know that there's some special hardware (vision module) but we're assuming that the rest is a general purpose learning/reasoning system. This doesn't work. LAD, and the need for predefined categories (the unlearnability of uncategorized stimuli).

"Functional analysis is like math; if it doesn't solve your problem you aren't using enough of it."

So what other modules might there be? "a face recognition module, a spatial relations module, a rigid objects mechanics module, a tool-use module, a fear module, a social-exchange module, an emotion-perception module, a kin oriented motivation module, an effort allocation and recalibration module, a child care module, a social inference module, a friendship module, a semantic-inference module, a grammar acquisition module, a communication-pragmatics module, a theory of mind module, and so on!" (Mithen 1996)

h1. Clues from the physical side
!brain_areas.jpg! © 2008 Brain Injury Association of Peel and Halton
!brain_connections.jpg! © 2010 by the National Academy of Sciences
Can we probe for the existence of such modules?

* Wason selection task: You are shown a set of four cards placed on a table, each of which has a number on one side and a solid color on the other. The visible faces of the cards show 3, 8, red and brown. Which card(s) must you turn over in order to test the truth of the proposition that cards with an even number on one face are red on the other side?
* You are a chaperone at a student social and see four students. Two are holding glasses and you know that one of them is 19, and the other is 25; the third student has a can of beer and the fourth a can of coke. Which student(s) do you need to check to enforce the rule that no one under 25 may drink alcohol?
* What is the smallest integer I such that for all x = I + R, R >= 0, we have 3*sin(x)+1.5^x > 4*cos(x)-x? vs. "What's the leftmost berry that you can get at?"

So what does this have to do with go? The fact that people can learn to play go at all means that there are likely basic modules being engaged (map making, yours vs. mine, escape vs. capture, estimating completion times, etc.). These would be REALLY GOOD TO LEARN ABOUT.

h1. 6. The real goal of AI (15; 80)

"AI is what computers can't do yet." That's because "AI is the study of things we know are possible (because people can do them) but we don't know how to write code for yet." It's important because the tasks are useful and we don't have good solutions for them.

* Watts used by brain: ~10
* Watts used by Monte Carlo cluster: ~40,000
* Watts used by a single server: ~400
* Watts used by a single modern CPU: ~100 (excluding memory, I/O, etc.)
* Watts used by a single low-power CPU (Atom): ~10 (excluding memory, I/O, etc.)

Commonplace assertion: "AI programs can beat humans at chess, and are close to beating them at go."

My claim: "Good humans can still beat the best AIs, but golfball bombs can "beat" them."

h1. The ornithopter objection

h1. The kludge objection

What we got from chess & related games:

* minimax
* planning with good heuristics
* tree pruning
* frame problem(s)
* generalization & clarity (but not a good solution)

What is the frame problem? Three problems: first-order logic, general planning (), perception/learning (cf. perception is cognition). Tie back to LAD & to the list of bird features vs. flight & how many true statements about birds we didn't consider. Blackboard model / GA for experts.

Why these fail for go:

* No good evaluation function
* Renormalization needed? Not insoluble, e.g. physics & gestalt
* Non-monotonic
* Based on what isn't there (star defined by blobs)
* Long term consequences / frame problem writ large
* Highly branching, bushy tree

Interestingly, these are all aspects of important real world problems as well. (The real world cases have the additional complication that we can't do the detailed, full-world simulations that the Monte Carlo solutions rely on, so we really do need a solution.)

Avenues to explore:

* Giveaway go
* Small boards, generalize up. Conventional wisdom: "It doesn't generalize (esp. even/odd)." Clearly not true -- humans do it, and don't see them as different games.
* Predict interesting areas (eye tracking)
* Model with a finite state machine
* Procedural learning with GAs

(digression) The necessary properties of a genotype -> phenotype mapping; why there are no wheeled creatures. Asking these sorts of questions can teach you a lot about internal coding. Example: Hamming distance between ASCII codes vs. phonemes (b/v, s/z, d/j, etc.); see the sketch below. Math-function space.
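A tiny sketch, in Scala, of the Hamming-distance digression above. The letter pairs are the ones listed; everything else (the object name, the output format) is my own illustration.

{{
// How far apart are "similar" phonemes in the ASCII "genotype"?
// Pairs are the ones mentioned above; object name is mine.
object AsciiPhonemes extends App {
  val pairs = Seq(('b', 'v'), ('s', 'z'), ('d', 'j'))
  for ((a, b) <- pairs)
    println(s"$a/$b: " + Integer.bitCount(a ^ b) + " bits differ")
}
}}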