evan dekhayser

a place for my thoughts


an app obituary

16 May 2017

After more than three years on the App Store, I decided to take down my first large project, Contact Archiver, last month.

In reality, I should have taken it down much sooner. Because it manages the user’s contacts, the app needed to be tested and updated for each version of iOS to make sure it still functioned properly. The negligible revenue it was generating no longer justified the time spent working on it. The app had been neglected since its iOS 10 update, and that wasn’t fair to its users either.

When this was my only app and I was not necessarily in it for the money, I could justify working on it. It helped me improve my skills and knowledge, it gave me something interesting to put on a resume, and it was fun to make. Now that I have other more interesting and financially viable projects lined up, as well as the upcoming costs of college, it doesn’t make sense to keep Contact Archiver around anymore.

I have always been a little disappointed in how the app sold anyway. As a paid app, it barely sold more than 50 copies. Changing it to a free app brought the downloads into the thousands, but the in-app purchases I later added made a negligible difference.

It’s a little sad to see Contact Archiver go, but I’m looking forward to being able to devote 100% of my development time toward more viable projects.

disappointment and opportunity

16 Apr 2017

Rejection is never easy. About a month ago, I was rejected from the university I have wanted to attend for the last decade. In the following two weeks, I had similar news from the next 9 schools on my list, all of which I would have been thrilled to attend.

Overall, I handled the news better than I had expected, but it still hurt pretty badly.

I did get into a school that was not originally on my radar. After seeing the school in person, I decided that it will be my home for the next four years. I couldn’t have predicted this outcome two months ago.

While I have heard that I will ultimately end up at the right school for me, I don’t personally believe in that kind of logic. I do really like the school I will be attending, but I don’t believe that it is necessarily my school. No school is my school until I am there and make it my own.

I also believe that you only get out of anything what you put in. Whether I had gone to one of my higher choices, to the school I committed to, or to my safety school, I think I could eventually reach the same goals. Sure, the paths would be different, they might take different amounts of time, and the destinations would look very different. But the main ideas would be the same.

I was definitely disappointed to see the rejections, but it’s much more productive to look at the other opportunities that are available to me. Instead of thinking about what I am missing, I need to think about everything still in front of me.

my podcast problem

25 Mar 2017

I am currently subscribed to 27 podcasts. Finding time to listen to them all is a challenge, especially considering that most of them come out on a weekly schedule.

Overcast makes it somewhat easier to manage, thanks to Smart Speed and the speed adjustments, but still, that is a lot of time to listen to people talk.

Here is my list of podcasts I listen to on a regular basis:

  • 99% Invisible
  • Accidental Tech Podcast
  • Analog(ue)
  • Canvas
  • Core Intuition
  • Clockwise
  • Common Sense with Dan Carlin
  • Connected
  • Cortex
  • Ctrl-Walt-Delete
  • Do By Friday
  • Exponent
  • Face the Nation Diary
  • Fatal Error
  • Free Agents
  • Happy Hour
  • More Than Just Code
  • Reconcilable Differences
  • Release Notes
  • Runtime
  • Stuff You Should Know
  • Swift Unwrapped
  • The Talk Show With John Gruber
  • Thoroughly Considered
  • Under the Radar
  • Ungeniused
  • Upgrade

I have to cut back on which podcasts I listen to, because this is getting out of hand. Most likely, I’ll decide which ones to cut using some of these criteria:

  • Do I remember what this podcast is about without looking at the description? Do I remember who the hosts are? If not, maybe I am not getting as much out of the show as I believed.
  • Is there anything I get from this podcast that I don’t get from any other podcast? No point in listening to redundant content.
  • How much of a time commitment is the podcast? One hour a month is easy to justify. Two hours per week, on the other hand, is less so.
  • How invested am I in the podcast already? If I have only listened to a few episodes of a podcast so far, it would be easier to stop listening than it would be if I listened for a year or two already.

Even though I want to listen to all of them, I need to get down to 20 podcasts so that keeping up doesn’t feel like a chore I have to finish before next week’s episodes come out. I enjoy these shows, but when listening to all of them becomes laborious, you know there’s a problem.

the last 10%

23 Feb 2017

I’ve been working on a project on and off for the last few months. Last month, I got to a point where I successfully answered what I think is the most challenging technical question: how to create the AI opponent. Now that the gameplay itself is just about finished, I am down to the last few pieces to plug in. I need to add multiplayer support, add a tutorial, and give the UI its finishing touches.

At this point, I have lost most of the energy that was pushing me to finish the game. What this has taught me about myself is that I am more motivated by solving a challenging problem than by releasing the final product.

This can be a good thing, sometimes. But if I never release the game, why did I bother putting this much effort into it? I got 90% of the project done, and if I don’t finish the last 10%, was it worth doing at all?

Yes, I learned how to create a basic computer AI. Yes, I figured out how to make the game engine do what I want. But ultimately, what I have learned can only benefit me if I release the project and earn money from it, or if this knowledge helps me pursue job or research opportunities in the future. If I don’t release the product, neither of these can happen.

My current problem is that another challenging technical project – that (I believe) has the potential to do so much more than this game – has shifted my motivation away from finishing this game. But while the new idea is still almost entirely in my head, my game is very much real already. I can’t give up on a project that is 90% done to work on another project that is still at 0%.

Part of the reason I am writing this on my blog is to force myself to finish this game, and soon. Finishing the game will allow me to get a sense of completion from it, and will let me work on my next project without the guilt of abandoning this one.

Last month, I had the same sense of excitement about the game as I now have for my new idea. If I can channel just some of that back to the game, I think it can become something I am really proud of.

chess, ai, and gkstrategist

16 Jan 2017

For the past several months, I have been developing a variation of chess using Apple’s GameplayKit. Specifically, I am trying to use GKStrategist to create an AI opponent for single player.

At the moment, GameplayKit comes with two strategists: GKMinmaxStrategist and GKMonteCarloStrategist. Both have their strengths, which the documentation covers well. But the deficiencies of each make them both unusable in a chess game.

This project is meant as a learning experience, so I have been trying to learn exactly what happens under the hood of these two classes. What better way is there to learn that than by recreating them?

MonteCarloStrategist

The Monte Carlo strategy focuses on win-loss conditions only, so it was not even worth considering for a game as long and drawn out as chess. It evaluates moves based on how likely they are to lead to a win or a loss: rather than evaluating every move, it plays out a number of random move sequences and sees where they lead. If a playout doesn’t reach an end state quickly, it moves on to another candidate move.
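
For reference, wiring up the stock strategist takes only a few lines. This is a rough sketch, assuming the Board and Move types my game defines (they show up again in the code later in this post); the budget and exploration values are just placeholders:

import GameplayKit

// Sketch only: Board and Move are the game's own GKGameModel / GKGameModelUpdate types.
func monteCarloMove(for board: Board) -> Move? {
  let strategist = GKMonteCarloStrategist()
  strategist.gameModel = board
  strategist.budget = 40               // how many random playouts to sample
  strategist.explorationParameter = 4  // how strongly to favor untried moves
  strategist.randomSource = GKARC4RandomSource()
  return strategist.bestMoveForActivePlayer() as? Move
}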

Even though this makes no sense for a game like chess (tic-tac-toe would be a more fitting application), I tried it out for the sake of science. For most of the game, the AI makes seemingly random moves (probably because they are random…). The AI finally stops making random moves when you put it into check, after which the king runs for its life.

Because of the complex nature of chess, Monte Carlo is not the right place to look for an AI.

MinMaxStrategist

This was the first strategist made available to developers, back at WWDC 2015. Minmax is also one of the leading techniques used by chess software, so it is clearly worth strong consideration. Without the memory optimizations Apple’s provided class probably uses, here is the source code of a Minmax strategist I developed for my game:

import GameplayKit

// Move and Board are the game's own GKGameModelUpdate and GKGameModel types.
class MinMaxStrategist: NSObject, GKStrategist {

  var gameModel: GKGameModel?
  var randomSource: GKRandom?
  var maxDepth: Int = 3
  var activePlayer: GKGameModelPlayer!

  func bestMoveForActivePlayer() -> GKGameModelUpdate? {
    activePlayer = gameModel?.activePlayer
    guard let moves = gameModel!.gameModelUpdates(for: activePlayer) as? [Move] else { return nil }
    moves.forEach { (move) in
      let modelCopy = gameModel!.copy(with: nil) as! Board
      modelCopy.apply(move)
      // The active player's move has been applied, so the next ply is the opponent's.
      move.value = minmax(model: modelCopy, depth: maxDepth, maximizing: false)
    }
    let movesSortedByStrength = moves.sorted { (m1, m2) -> Bool in
      m1.value > m2.value
    }
    return movesSortedByStrength.first
  }

  func minmax(model: GKGameModel, depth: Int, maximizing: Bool) -> Int {
    // The concrete Board class implements isWin, isLoss, and score from GKGameModel.
    let board = model as! Board
    if depth == 0 || board.isLoss(for: activePlayer) || board.isWin(for: activePlayer) {
      return board.score(for: activePlayer)
    }

    if maximizing {
      var bestValue = Int.min
      for move in model.gameModelUpdates(for: model.activePlayer!) as! [Move] {
        let modelCopy = model.copy(with: nil) as! Board
        modelCopy.apply(move)
        let value = minmax(model: modelCopy, depth: depth - 1, maximizing: false)
        bestValue = max(bestValue, value)
      }
      return bestValue
    } else {
      var bestValue = Int.max
      for move in model.gameModelUpdates(for: model.activePlayer!) as! [Move] {
        let modelCopy = model.copy(with: nil) as! Board
        modelCopy.apply(move)
        let value = minmax(model: modelCopy, depth: depth - 1, maximizing: true)
        bestValue = min(bestValue, value)
      }
      return bestValue
    }
  }
}
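
Hooking the strategist into the game is then just a matter of handing it the current model and applying whatever move it returns. A hypothetical usage sketch (board stands in for the game's current Board, and the surrounding game code is omitted):

let strategist = MinMaxStrategist()
strategist.gameModel = board
strategist.maxDepth = 3  // plies to look ahead

if let move = strategist.bestMoveForActivePlayer() as? Move {
  board.apply(move)
}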

Minmax assumes that for every move, each player will do what they can to improve their chance of success. It goes through every possible permutation of moves until it reaches a certain depth, and rates each move according to its future consequences (or, once it hits a terminal node, by a scoring function you provide).

By assuming that a player would not intentionally make a harmful move, Minmax recognizes which moves the opponent would take and which the opponent would not. The opponent would not, for example, march their king into enemy lines.

One difficulty with Minmax is providing the aforementioned scoring function. How do you rank a chess board? One approach would be assigning different pieces different values (pawn = 5, knight = 15, queen = 100, etc.) and finding the net value of a board. You can also add a multiplier to where on the board a piece is located: a pawn that is one space away from promotion is worth more than a pawn in its initial position.
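
As a toy illustration of that material-plus-position idea (the types, piece values, and promotion bonus below are invented for the example, not necessarily what the game ends up using):

// Invented types and values, purely to illustrate the scoring approach.
enum PieceKind { case pawn, knight, bishop, rook, queen }

struct ScoredPiece {
  let kind: PieceKind
  let isMine: Bool           // owned by the player being scored
  let rowsFromPromotion: Int // only meaningful for pawns
}

func materialScore(of pieces: [ScoredPiece]) -> Int {
  func baseValue(_ kind: PieceKind) -> Int {
    switch kind {
    case .pawn: return 5
    case .knight, .bishop: return 15
    case .rook: return 25
    case .queen: return 100
    }
  }
  return pieces.reduce(0) { total, piece in
    var value = baseValue(piece.kind)
    // A pawn close to promotion is worth far more than one on its home row.
    if piece.kind == .pawn {
      value += max(0, 6 - piece.rowsFromPromotion) * 3
    }
    // My pieces count toward the score; the opponent's count against it.
    return total + (piece.isMine ? value : -value)
  }
}

A real score(for:) on the Board class would do the same kind of accounting over the actual piece layout.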

Another critical issue with Minmax is that it is exhaustive. White has 20 possible first moves, and Black has 20 possible replies. Just looking at those two plies, you already have 400 positions to score (and the scoring function itself may not be cheap). Look ahead one more move and you are already upwards of 8,000 positions. One more? Have fun with 160,000 boards. iPhones are fast, but they aren’t that fast.

How can this be improved? Alpha-Beta Pruning.

AlphaBetaStrategist

I do not believe GKMinmaxStrategist uses Alpha-Beta pruning, because the documentation makes no mention of it. If I get the chance to attend WWDC 2017, I will definitely reach out to an Apple engineer.

Update: GKMinmaxStrategist likely does use this technique. I timed the class implemented below against GKMinmaxStrategist, and Apple’s version runs faster. My best guess is that they use Alpha-Beta pruning as well as other optimizations. If I can go to WWDC this year, I will still talk to an Apple engineer.

The code for the AlphaBetaStrategist is the same as MinMaxStrategist, except for these changes:

move.value = minmax(model: modelCopy, depth: maxDepth, maximizing: false)

becomes

move.value = alphabeta(model: modelCopy, depth: maxDepth, alpha: Int.min, beta: Int.max, maximizing: false)

Also, the minmax function is replaced by the following:

func alphabeta(model: GKGameModel, depth: Int, alpha: Int, beta: Int, maximizing: Bool) -> Int {
  var alpha = alpha
  var beta = beta
  let board = model as! Board
  if depth == 0 || board.isLoss(for: activePlayer) || board.isWin(for: activePlayer) {
    return board.score(for: activePlayer)
  }

  if maximizing {
    for move in model.gameModelUpdates(for: model.activePlayer!) as! [Move] {
      let modelCopy = model.copy(with: nil) as! Board
      modelCopy.apply(move)
      let value = alphabeta(model: modelCopy, depth: depth - 1, alpha: alpha, beta: beta, maximizing: false)
      if value > alpha {
        alpha = value
      }
      // Cutoff: the minimizing player already has a better option elsewhere.
      if alpha >= beta {
        return alpha
      }
    }
    return alpha
  } else {
    for move in model.gameModelUpdates(for: model.activePlayer!) as! [Move] {
      let modelCopy = model.copy(with: nil) as! Board
      modelCopy.apply(move)
      let value = alphabeta(model: modelCopy, depth: depth - 1, alpha: alpha, beta: beta, maximizing: true)
      if value < beta {
        beta = value
      }
      // Cutoff: the maximizing player already has a better option elsewhere.
      if beta <= alpha {
        return beta
      }
    }
    return beta
  }
}

I was able to understand Alpha-Beta pruning enough to make this code functional, but I don’t think I could effectively explain it to others. I must defer to Wikipedia.

Without going into specifics, the algorithm figures out which branches do not need to be evaluated, based on the best score the maximizing player is already guaranteed (alpha) and the best score the minimizing player is already guaranteed (beta). There are plenty of college course materials you can look at online (I especially recommend Cornell’s or UPenn’s).

More?

There is always more. I am currently using the Alpha-Beta strategist in my game, though the scoring function still needs some tweaking to work better. I’m also looking for places to make this algorithm more efficient, as time constraints still force me to only look 2 or 3 moves deep. I would like to be able to move that up to 4 or 5.

Hopefully you were able to take something away from this. This has all been a lot of fun to research.