Thursday, November 12, 2015
I was really pleased with the way my two draft experiments worked out, and I learned quite a bit about the format, both from studying the data and from trying to apply the results to actual drafts.
I wanted to try a postmortem draft, keeping in mind some of the card evaluations that shifted as a result of the data article and the two draft experiments.
Pack 1 pick 1:
I've been playing competitive Magic since 2003. I have played in 34 Pro Tours.
At one point, this blog contained only recaps of my drafts, so that I could discuss them with other people and improve my Limited skills.
Now it's mostly a place to share my thoughts and ideas about competitive Magic in general.
I'd like to extend a personal thank you to anyone who takes the time to read my posts and offer opinions.
Wednesday, November 11, 2015
BFZ #7 - the experiment, part II
This is a second attempt at the experiment I proposed in this blog post - I'm going to do a BFZ draft, but for the first handful of picks, I will only be taking the card with the highest win rate, as outlined in this article on MTGGoldfish.
Pack 1 pick 1:
Monday, November 9, 2015
BFZ #6 - the experiment
Recently I read an article about Battle for Zendikar draft on MTGGoldfish that I loved:
http://www.mtggoldfish.com/articles/83k-games-of-battle-for-zendikar-limited-analyzed
Basically, the author scraped 83,000 replays of BFZ draft games and analyzed all the data, including average game length and each individual card's win percentage.
Since I'm a statistician and also a giant nerd, I can't get enough of these types of articles and analyses. After thinking about it some, I decided to try an experiment. I am going to draft a deck and, for the first handful of picks, I'm only going to select the card with the highest win percentage.
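I have no idea what the article's actual pipeline looked like, but the per-card computation itself is simple enough to sketch. Here's a minimal Python version, under the assumption (mine, not the article's) that each scraped record boils down to the cards a player cast plus whether that player won:

```python
from collections import defaultdict

def card_win_rates(games):
    """games: iterable of (cards_cast, won) pairs, one per player per game."""
    seen = defaultdict(int)  # games in which the card was cast
    wins = defaultdict(int)  # of those, how many the caster won
    for cards, won in games:
        for card in set(cards):  # count each card at most once per game
            seen[card] += 1
            wins[card] += won    # True/False counts as 1/0
    rates = {card: wins[card] / seen[card] for card in seen}
    return rates, seen
```

The `seen` counts come along for the ride because they matter for the caveats below.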
This method is similar to the Ben Stark method of drafting - spend the first few picks of each draft taking the strongest card, then settle into whatever colors happen to be open. Gerry Thompson also had an article about BFZ draft on SCG (today, actually) that mentioned this method. The driving principle is that by staying flexible you'll sometimes waste a really strong first few picks when those colors dry up, but more often than not you'll end up with a strong deck on average. This is basically an implementation of that method, except my picks are influenced not by what cards I *think* are strongest, but by cold, hard data.
Some side notes: Cards like Swarm Surge and Inspired Charge have highly biased win rates, since they usually aren't cast until the turn that they win the game. A similar phenomenon can be found with Dispel - its win rate is very high, but it's only seen in about 1% of analyzed games, so what's happening is that it's being boarded in only when it's most effective, and it tends to hit hard in those scenarios. For the purpose of this exercise, I'll be ignoring Swarm Surge, Inspired Charge, and Dispel as early picks - but anything else is fair game.
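If you were building these numbers yourself, the Dispel problem is easy to screen for, since it shows up as a tiny sample. Here's a hypothetical filter using the `seen` counts from the sketch above, with the 1% threshold taken from the Dispel figure:

```python
def low_frequency_cards(seen, total_games, threshold=0.01):
    # Cards whose win rate rests on very few games. Sideboard-only cards
    # like Dispel get cast precisely when they're at their best, which
    # inflates their rate, so it's safer to flag them than to trust it.
    return {card for card, n in seen.items() if n / total_games < threshold}
```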
(Also, if you read the article, and I hope you did: I'll be using Likely Win % as the first criterion, with Win % as a tiebreaker if necessary.)
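Putting the last few paragraphs together, the experiment's entire pick rule fits in a few lines. This is just a sketch: the `stats` mapping is assumed to hold each card's (Likely Win %, Win %) pair as read off the article's tables, and `EXCLUDED` is my three-card blacklist from above:

```python
EXCLUDED = {"Swarm Surge", "Inspired Charge", "Dispel"}

def make_pick(pack, stats):
    """pack: card names in the booster; stats: name -> (likely_win_pct, win_pct)."""
    candidates = [card for card in pack if card not in EXCLUDED and card in stats]
    # Tuple comparison does the work: highest Likely Win % first,
    # with Win % breaking any exact ties.
    return max(candidates, key=lambda card: stats[card])
```

Comparing the tuples directly gives the tiebreaker for free, since Python only looks at Win % when the Likely Win % values are identical.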
Pack 1 pick 1: