The Twixt Bot

53 replies. Last post: 2018-12-19

  • shimanayoku at 2008-09-26

    Does anyone know anything about the Twixt bot? It is playing in rating tournaments but is not in the waiting room.

    It takes a long time to make moves – is this because it is very weak and needs a long time to think?

  • David J Bush ★ at 2008-09-26

    Looking at its info page, most of the games it is currently playing are unrated. Maybe these are from individual challenges. It's in one rating tourney, rt.2008.7.100. Based on the few moves it has made so far, it doesn't appear totally clueless but it does tend to get caught up in a local battle, ignoring what's going on over the whole board. Kevin Moesker, is this your AI?

  • shimanayoku at 2008-09-26

    Yeah, those rated games are presumably from the brief time it was available for challenge in the waiting room, but it's not there any more for some reason.

  • jds at 2008-09-26

    My guess is that the author is just testing it by feeding it moves by hand, which would explain the move speed and the fact that it just popped up briefly and is now “full”. :-)

  • shimanayoku at 2008-09-27

    You're surely right, jds. At first I assumed it was just like the Golem bot etc, and had been set to think for hours on end :).

    The author is probably interested in seeing how his program does in the field.

  • shimanayoku at 2008-09-27

    Correction: “… and had been set …” should read “… but had been set …”

  • jds at 2008-09-27

    I think game #942077 illustrates best how the test is going. :-)

    I'd love to see some of the recent computer-go techniques applied to twixt. I imagine a Monte-Carlo twixt player with a good opening book and 100 cpu cores to run on would play decently.

  • shimanayoku at 2008-11-04

    The twixt bot came back. I think it had improved a fair bit.

  • Wayne Magor at 2009-03-08

    I laughed and laughed at jds's comment about game #942077. In all fairness, the bot was rated just 1500 at the time of that game.

    I just played TwixtBot in 2 games, but they weren't interesting games. What was impressive was that the bot had a rating of 1608 before playing me. How in the world did it get a rating that high? That's better than about 75% of the rated players on this site. The strange thing is that it didn't seem to play at the level of a 1608 human player. It must be quite inconsistent in its play.

    The games were:

    #1012366

    #1012375

    As you can see from the 2 games, I pretty much made a straight connection from one end of the board to the other. In one game there wasn't a single outlying peg, in the other there was only one outlying peg and that was the first peg placed which was swapped. Without an improvement in its strategy, I think it has topped out in its rating.

    I'd love to see the code for this computer player. It may have some potential. I'm sure it's not easy to develop a good Twixt player. Anyone know who's working on it?

  • jds at 2009-03-09

    I thought its wins against 1600-level players showed at least a bit of grist:

    992412 and

    995094.

    No doubt the human players weren't taking it too seriously, but this explains how its rating got above 1600.

  • Robert Irvin at 2009-04-08

    Has anyone considered that maybe it's not really a computer program? Its play doesn't seem consistent enough to be a program, and it doesn't make moves at regular intervals like I would expect a computer to. Maybe it's possible that a player named himself Twixtbot. Besides, no one has claimed responsibility for creating it to the best of my knowledge; this also seems unusual to me if someone did indeed program it to play Twixt.

  • Kd Hoffmann at 2009-04-08

    I think it's just a private software project w/o any interface to the net. The info states “This account is for my computer AI Twixt Player. Version 7.0 now!”

    I am just playing a match against it (him/her?)!

  • TwixtBot at 2009-04-09

    Hi everyone, I think I now have enough street cred to be allowed to speak. So let me answer questions that folks have:

    Q. It takes a long time to make moves – is this because it is very weak and needs a long time to think?

    A. Yes. I have been trying to give the bot 15 minutes of thinking time for each move, which only brings it to 1600 as you can see.

    Q. Kevin Moesker, is this your AI?

    Q. Anyone know who's working on it?

    A. No. My name is Jordan Lampe, but I don't have an account here (well, I do, but I don't play on it).

    Q. Has anyone considered that maybe it's not really a computer program?

    A. I cannot prove it easily, but it is in fact an actual computer program.

    Q. I think it's just a private software project w/o any interface to the net.

    A. Actually, I have a script which automatically looks every five minutes for new moves to make. So between that and the 15 minute thinking time, it should take 20-25 minutes to reply to a move. More if there are many players going at once: I only allow it to think about 4 moves at a time (I have an 8-core machine but usually some of the other cores are being used for something or other).
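
    Concretely, the controlling script is just a poll-and-dispatch loop, something like this sketch (hypothetical helper names, not the real code):

        import time
        from concurrent.futures import ThreadPoolExecutor

        POLL_INTERVAL = 5 * 60   # look for new moves every five minutes
        MAX_GAMES = 4            # think about at most 4 moves at a time
        in_flight = set()        # games we are already thinking about

        def games_waiting_for_us():
            # Hypothetical: ask the site which of our games are on move (stubbed here).
            return []

        def think_and_reply(game_id):
            # Hypothetical: run the engine for ~15 minutes, then submit the chosen move.
            try:
                pass  # engine search + move submission would go here
            finally:
                in_flight.discard(game_id)

        with ThreadPoolExecutor(max_workers=MAX_GAMES) as pool:
            while True:
                for gid in games_waiting_for_us():
                    if gid not in in_flight:
                        in_flight.add(gid)
                        pool.submit(think_and_reply, gid)
                time.sleep(POLL_INTERVAL)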

  • Robert Irvin at 2009-04-09

    Excellent. Thank you, Mr. Lampe. You are doing a fine job and I believe you have created the strongest computer twixt playing AI out there.

  • Letstry_Laurent at 2018-11-28

    Twixtbot is back and this 2018 version is very impressive. It has very few defeats, already has 2 real victories against players rated over 2000, and is currently doing very well in a game against David J Bush!

    Jordan, can you tell us a bit more (than on the twixtbot player main page) about this new version?

    Thanks.

  • TwixtBot at 2018-11-28

    Sure.  But there's lots of stuff I could blather on about… What would you like to know?

  • David J Bush ★ at 2018-11-28

    I lost. I believe I made a big error in the endgame. I welcome discussion on the analysis page.

  • Letstry_Laurent at 2018-11-29

    Hi Jordan,

    First of all, it would be interesting to know a bit about the history of this project, which is apparently not a new one but has been greatly improved recently.

    Are you working in the research area, or is it a kind of hobby?

    Then I'd be interested in some more “technical” details. I probably don't understand very well the point about 50,000 tries per move (on the Twixtbot main page):

    If you only had to take into consideration your next move, there are about 500 possibilities, much less than 50,000, but if you also want to take into consideration your opponent's response move, the number of possibilities is about 500 x 500 = 250,000, not covered by 50,000 trials.

    A way to reduce this exponential “complexity” could be to first apply an algorithm that removes some inconsistent moves (for example, at least at the beginning of a game, playing behind the border lines could be considered inconsistent and so not taken into consideration). I'm not sure you use this kind of algorithm, or if so, whether you remove a lot of possible moves (for example, twixtbot's move 9.v4 in game #2010910 against David J Bush seems quite unusual to me - not sure I would have considered it with my limited human brain…).

    Anyway, the way I understand these 50,000 trials per move is that the “depth” (number of moves played alternately by the players) calculated by the bot is limited to 1 or 2. If this is right, that means you use an algorithm to evaluate the strength of your position on the board to decide which move is best. I would be interested to have a rough idea of how you calculate it.

    Sorry if the message is not very clear; English is not my native language and the subject is not so simple…

    Bye.

    Laurent.

  • TwixtBot at 2018-11-29

    It's a hobby for me. I always thought Twixt was a game sort of like Go from the point of view of AI: a big board with lots of moves to choose from, combined with sharp tactics. The old Twixt bot used the techniques that worked for Go computers back then, and the new one uses the new tricks.

    The new techniques give us a neural net which produces 24*22 + 1 output numbers. The 1 is a score from -1 to +1 telling us who we think is winning and by how much. The other 24*22 outputs are the “policy head”, which estimates how likely each move is to be the best. For example, after 18.g5, the neural net evaluates the position as +0.373 and the top 10 moves are:

    f16  66.74

    m17  21.11

    e14   9.71

    h15   1.33

     t5   0.28

    l17   0.24

    j16   0.13

    n17   0.10

    k17   0.07

    r15   0.04

    f16 is not actually a very good move in this position, but the point is to notice that the probabilities drop off quite quickly.  There are only a handful of moves ever worth looking at.  This is how the bot is able to look enough moves in advance to see that 19.f16 20.h17 21.g14 22.h13 23.h12 24.i11 ends up being a bad position.

    50,000 refers to the total number of neural net evaluations performed during the search.  It builds up a game tree, adding one node for each net eval.  It decides where to add the new node by making a tradeoff between looking at good moves that it has already looked at a lot vs. less good-looking moves that it hasn't looked at very much (and therefore have more “room to improve”).
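
    Roughly, in pseudocode, that tradeoff is the usual AlphaZero-style selection rule: each candidate move is scored by its average value so far plus an exploration bonus proportional to its policy prior and inversely related to its visit count. A sketch, not the exact code (the constant c_puct and the data layout are just illustrative):

        import math

        class Node:
            def __init__(self, prior):
                self.prior = prior       # policy-head probability of the move leading here
                self.visits = 0
                self.value_sum = 0.0     # sum of value-head results backed up through this node
                self.children = {}       # move -> Node

            def mean_value(self):
                return self.value_sum / self.visits if self.visits else 0.0

        def select_child(node, c_puct=1.5):
            # Exploit moves that already look good, but keep exploring moves the
            # policy head likes that have few visits (more "room to improve").
            total_visits = sum(child.visits for child in node.children.values())
            best_move, best_score = None, -float("inf")
            for move, child in node.children.items():
                explore = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
                score = child.mean_value() + explore
                if score > best_score:
                    best_move, best_score = move, score
            return best_move

    The search follows the best-scoring child downward until it reaches a leaf; that is where the next net evaluation, and the next tree node, gets added.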

  • MisterCat ★ at 2018-11-29

    As far as I understand, AlphaGo / AlphaZero techniques involve playing many games internally - thousands, millions, billions - and tabulating the results. The computer chooses the move that shows the most wins.

    This type of 'thinking' could not be done by human beings; the only 'technique' to be gleaned here would be observing and learning from the moves chosen - moves frequently not even considered by human players.

    A similar strategy was used by Doctor Strange in Avengers: Infinity War; he observes millions of future outcomes and chooses the only path where the Avengers beat Thanos; presumably.

    (meow)

  • TwixtBot at 2018-11-29

    Alpha Go and Twixt Bot are both playing millions of games to train their respective neural nets, but when it comes to playing someone, the number of positions examined comes down to the thousands.

  • David J Bush ★ at 2018-12-06

    What does everyone (of course including Richard M.) think about adjusting TwixtBot's rating to at least 2200? That would be an artificial adjustment, but this is an artificial player. It would give highly rated humans more incentive (or less disincentive) to play rated games against it.

  • MisterCat ★ at 2018-12-06

    I think that the bot should work its way up to a legit 2200, and then you can play it; or challenge it to unrated games.  Jordan seems to be taking care of entering it regularly, so this should not take all that long. meow.

    Happy Hanukkah!

  • _syLph_ at 2018-12-06

    Well, its strength is clearly not that of an 1800 player as its rating suggests, but more closely around 2400 or so. The rating change for anyone playing it just sucks; it's not really a matter of time. By the time it gets up there, it will have taken the rating points from someone. So I don't see a reason not to raise it.

  • technolion at 2018-12-07

    I just lost against TwixtBot: http://www.littlegolem.net/jsp/game/game.jsp?gid=2018960

    The game was unrated, luckily for me, but it would be good if the bot played rated games as well.

    I am impressed by its strength!

  • TwixtBot at 2018-12-07

    I'm signed up for many tournaments.  You think I should sign up for the regular side rated games as well?

  • technolion at 2018-12-07

    Yes, definitely. I would find it very interesting to see at which level the AI currently is when playing real people.

  • Letstry_Laurent at 2018-12-08

    Thank you Jordan for the explanations.

    Regarding the rating of Twixtbot, I think, like MisterCat, that the AI has to win its rating points. This is how the rating system is built.

    I remember quite the same discussion about some good players (level around 2000 points) who regularly let their rating drop to 1500 or even less and then rise again, and so on… It's not fair when you play against them while they're at 1500, because their real level is much higher, but this is how the system is built (with a finite number of rating points equal to 1500 * nb-of-registered-players, no more points, no less points…).

  • Christian0 at 2018-12-08

    Can it play on 30x30?

  • TwixtBot at 2018-12-08

    Nope, it can only play 24x24 and it can only play TwixtPP.

  • Alan Hensel at 2018-12-09

    Well, I hope 30x30 TwixtBot happens someday. 24x24 TwixtBot is already fascinating; another size might teach us a few more things.

    I think it would be interesting to see a table of best opening moves for 24x24 according to TwixtBot, and it would be interesting to compare that to a 30x30 table of best opening moves.

    I noticed that the line between swappable and not-swappable first moves seems to be farther from the edge for TwixtBot than we humans seem to think it is. Maybe this is because TwixtBot knows stronger 2nd moves? It would be interesting to have 2nd moves noted in the table, as well.

    What if you could get TwixtBot, on request, to post a post-mortem of an entire game, in mostly understandable English, to Commentator? It would be interesting to know where TwixtBot thinks the human screwed up.

    I remember someone once did some stats on thousands of LG games to determine if there was positional advantage on a Twixt board (in order to give advice like “play in the center, like Chess” or “play in the corners, like Othello”). What he found was no statistical advantage anywhere. But that was with human players. Does TwixtBot, over the course of millions of games, find any spots of statistical positional advantage anywhere on the board?

    For the purpose of game design, it might be of most general interest to have versions of TwixtBot dialed back to Easy, Medium, and Hard levels, maybe fixed to converge on a specific rating level or percentile on the site. (Of course, if you do this, definitely please keep the Crushing level around!)

  • Alan Hensel at 2018-12-09

    I wonder, what percentage of TwixtBot's games against itself end in a draw?

  • Letstry_Laurent at 2018-12-09

    I like that question very much, Alan!

    And that leads me to another question: in a game of Twixtbot against itself, when there is no draw, who wins and who loses? ;-)

    That reminds me of Dr. B. in “The Royal Game” by Stefan Zweig…

  • David J Bush ★ at 2018-12-09

    Uh oh.

    Take a look at this game. I just proposed a draw, and TB rejected it. It's not rated, so I could just resign, but I would like the draw on the books for my pride's sake. The creator, whose handle on the analysis site is BonyJordan, said its strength might be increasing by 20 Elo points per day. So this might be the best result I will ever get against it. And there's an additional problem.

    I may have fallen into TB's final trap when I played A4. Maybe I should have played B5 instead. Now, one of my border holes has been occupied. It is my move, and there are an even number of vacant holes remaining in the common playing area. That implies TB will be able to make the last move there. Then I will have to play again in one of my border rows. At that point, it will be TB's move, but it will have 44 reserve holes to play in whereas I will have just 42. So, after TB makes move 571, I will face a loss on time! Whereas, had I played B5, TB would have been looking at a loss on time.

    It seems a pity that we can't do analysis on Alan's site just yet. For example, I'm not sure why TB went for the draw with T6 when it looks like it could have gone for a clean win with U5.

  • Alan Hensel at 2018-12-09

    Wow. If TwixtBot won't accept a draw, how the heck did this 1-move game against mmKALLL on Nov 1 end in a draw?

  • _syLph_ at 2018-12-09

    on the bright side you may be the last human to survive the bot for over 500 moves

  • TwixtBot at 2018-12-09

    The original draw code said: “accept a draw if 1. losing or 2. neither side can win under any circumstances”.  It turns out TB is a bit pessimistic about its chances early on in the game, so that's where the one move draw came from.  I now require at least 10 moves before accepting a draw.

    In the David Bush game, although White cannot possibly connect, Black still can, so that is why it hasn't been accepting that draw.  I changed it so that it only cares about its own side being able to possibly connect, so that will get that game out of the way.  Incidentally, the neural net thinks it is handily winning that game.
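
    Spelled out, the amended rule is roughly the following (a sketch, not the actual code; the arguments are hypothetical):

        def should_accept_draw(move_number, value_estimate, can_still_connect):
            # Amended draw rule as described above (a sketch).
            # value_estimate: value-head score from our point of view (-1..+1).
            # can_still_connect: whether our own side can still possibly connect.
            if move_number < 10:
                return False              # early evals are too pessimistic; no instant draws
            if value_estimate < 0:        # we appear to be losing
                return True
            return not can_still_connect  # only our own side's connection matters now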

    In self-play, very few games end up in draws.  I haven't measured, but it seems similar to human draw rates - around 1% or so?  This may explain how the net can think it is winning by so much: it doesn't have many draws in its training set so it hasn't learned much about draws.

    The first and swap move are not chosen by the neural net.  Instead what I did was played the bot a few dozen times against itself with each possible start move, and then blended the results to get an estimate of win % for each move.  From those estimates, I randomly pick a move except weighted to favor moves nearer 50%.  So both b2 and l12 have non-zero probability although don't expect either one often.  For the swap move, I look at the estimate and swap if my side is below 50%.  From this little study, it appears that the horizontal coordinate hardly matters and the vertical coordinate is the key indicator of how good the first move is for white.  This was a few generations ago on the net so maybe the story has changed since then.
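
    In pseudocode, the opening pick works something like this (a sketch; the exact weighting shape is arbitrary):

        import random

        def pick_first_move(white_win_pct):
            # white_win_pct: dict of opening move -> estimated White win % from the
            # self-play study.  Weight falls off with distance from 50%, so every
            # opening (even b2 or l12) keeps a non-zero probability.
            moves = list(white_win_pct)
            weights = [1.0 / (1.0 + abs(white_win_pct[m] - 50.0)) for m in moves]
            return random.choices(moves, weights=weights, k=1)[0]

        def should_swap(first_move, white_win_pct):
            # Swap when the estimate says our (second player's) side is below 50%,
            # i.e. when White's opening scores above 50%.
            return white_win_pct[first_move] > 50.0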

    P.S. I got a complaint about having a bot in rated games.

  • ypercube at 2018-12-10

    Complaint? There have been bots playing in Littlegolem for many years. Many of them played at the first level of championships and a few even won championships.

    If anything, a Twixt bot was missing ;)

  • TwixtBot at 2018-12-10

    I think the complaint was related to the “rated games” not about the tournament games.

  • David J Bush ★ at 2018-12-10

    On LG, fewer than 0.1% of games end in drawn positions, although considerably more than that are agreed draws.

    Thanks for fixing the bug! So, now the machine will no longer regard a draw as a win. My last slim hope has been closed off I guess.

    The bot plays very quickly. Are there plans to make it available in real time on iggamecenter.com or boardspace.net? Both those servers use standard rules with link removal, not PP rules. I also hope it learns to play 30x30.

  • TwixtBot at 2018-12-10

    As noted above, the bot only knows TwixtPP and 24x24.  Like most AIs, you can decide how long you want the bot to spend thinking.  I rather arbitrarily chose to aim for about 5 minutes thinking time per move.

    To change the bot to play 30x30 is a trivial code change (basically the number 24 occurs exactly once in the code, plus a little bit for dealing with x-coordinates > 26), but you probably need a few more months to train on self-play.  Some of the Go guys have noted that a 19x19 trained Go neural net plays pretty well on a 9x9 board, so you might get a good seed from the 24x24 Twixt neural net.

    To change to deal with link removal is a bit more trying; just enumerating the list of possible “moves” is tricky.  You might be able to hack it by splitting the move into a peg chooser and then a link chooser, but you might also lose a fair amount of strength.  I don't know.

    In any case, while I'm not opposed to putting it up for “real time play” per se, I don't actually have a program capable of making intelligent Twixt moves for the classic rule set those two sites use.

  • David J Bush ★ at 2018-12-11

    With regard to standard rules, maybe it would help if, in situations where link removal might happen, you enumerate all possible winning paths “through the gauntlet” which do not cross over themselves, and list a possible move only if it helps to achieve one of those paths. You might also be interested in this BGG thread.

  • Alan Hensel at 2018-12-12

    Whoa, Twixt Championship 52 is Size 30?

    And TwixtBot is in it?

  • David J Bush ★ at 2018-12-12

    I don't see the bot listed in the championship. I started a different thread for that tourney.

  • TwixtBot at 2018-12-12

    In tier 2 http://littlegolem.net/jsp/tournament/tournament.jsp?trnid=twixt.ch.52.2.2 but as soon as someone makes a move past x or 24 that'll crash the bot, so I guess we'll be resigning a bunch of games.

  • ypercube at 2018-12-13

    You can resign in the first move and you won't lose any rating points.

  • David J Bush ★ at 2018-12-13

    Oh yeah! Please resign on your first move. We don't want the bot to lose rating points. Oh, no.

  • Alan Hensel at 2018-12-18

    Congrats to Florian for beating TwixtBot!

    Its first defeat to a human in 37 days! (besides David's draw)

    #2021134

    an amazing game!

  • Letstry_Laurent at 2018-12-18

    Well done Florian!

    HUMANS! HUMANS!

  • _syLph_ at 2018-12-18

    TwixtBot seems to have some flaw in the opening.

  • David J Bush ★ at 2018-12-18

    Well done Florian! You make it look easy.

  • Florian Jamain at 2018-12-18

    Gonna analyze this a little on the commentator; maybe the Bot can play some variations and tell us what its thoughts are.

  • mmKALLL ★ at 2018-12-19

    Amazing match Florian, congratulations!! I learnt a lot by going through the moves; just seeing this game made TwixT much more fun for me. :)
