Tuesday, September 27, 2016

Genetic Algorithms

An interesting application of heuristics is something called a genetic algorithm. Even though we have yet to talk about these in class (and probably will not), the topic is still very interesting.

It belongs to the evolutionary family of algorithms, meaning that the algorithm changes continuously as it runs. A genetic algorithm works by keeping an initial population of possible solutions, each represented as a "genome" (this is what allows the organism to change). The population also needs a way to be evaluated for the "fitness" of each genome, that is, how close it comes to a solution. The final ingredient is giving the population a way to change, be it randomly or through interactions; by applying mutation to the entire population, you can force the potential solutions to evolve. The population changes iteration by iteration as the algorithm runs, and on every new iteration the algorithm attempts to produce a new population that is (possibly) a better fit to the solution. Eventually, if all goes well, a solution will be found that is deemed acceptable by the algorithm's tolerance level. Or you could simply run out of computation time.
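
To make that concrete, here is a minimal sketch of the loop described above. Everything in it is made up for illustration: the genome is just a bit string, and the fitness function simply counts ones as a stand-in for a real evaluation.

```python
import random

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01

def fitness(genome):
    # Toy fitness: count the 1s. A real problem would score how close
    # the genome's decoded solution is to what we actually want.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Initial population scattered at random across the search space.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:   # "good enough" tolerance
        break
    # The fitter half breeds the next generation, with random mutation.
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("generation", generation, "best fitness", fitness(best))
```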

This is particularly good for problems where one might have no idea in what part of the domain the optimal (or acceptable) solution lies. By starting with a stochastic distribution of the population throughout the search space, the algorithm has a higher chance of converging, through evolution, to a local minimum/maximum of the solution. It should be noted that this technique is best used when an optimal solution isn't needed, only one that is good enough for the user. In many such cases it might be impossible to brute-force an optimal solution, or doing so would simply take too much time.

Some of the downsides to this method are that the structure of the population and the evaluation method have to be defined by the programmer. Therefore any bias is hardcoded in from the start, and the design may be missing crucial connections that weren't known at the time. Of course, this problem seems to be present in all forms of research and design. Another problem of note is that genetic algorithms tend to be used on discrete solution sets, which means setting up a finite domain for the solution; some problems may not be well formed for this.

This class of algorithm sees great use everywhere, even though some new ways of thinking about evolutionary techniques have risen to prominence over the last decade. It still continues to work very well for many problems. At its heart, this is an optimization technique, so any problem that can be well posed as a function is suited for genetic algorithms. It excels at things like scheduling tasks and financial analysis. It just has a much cooler-sounding name than multivariable optimization.

I think an interesting idea for a final project might be implementing a genetic algorithm. Finding an interesting problem to apply it to is another matter, though.

Monday, September 26, 2016

Adversarial Search

Adversarial search, or minimax search, is an algorithm that tries to achieve the best outcome of a situation while also accounting for the actions of an opponent. It is similar to other, simpler search algorithms in that it creates a tree of the possible states and tries to walk down it. At every other level of the tree, the algorithm must account for the opponent. In most cases, when the opponent is choosing, you expect them to choose the worst outcome for you and the best outcome for them. This search has many uses and is applied in many games.
In games where minimax is applicable there can only be two players. A good example is the game of checkers. It is a simple game, so it is possible for a computer to compute all the possible moves in the game; if played correctly, the game is actually impossible to lose. Minimax is a great algorithm to use here because you can quantify the goals of the game. For example, if you win the game, you get infinite points; if you lose the game, you lose infinite points. If you lose a piece, you lose a point; if the opponent loses a piece, you gain a point. The minimax algorithm can then look down the tree to find the moves that would give it the most points.
We can also assume that the opponent wants us to get the fewest points possible. Applying this to checkers, we wouldn't want to move our piece directly in front of the opponent's piece, since that would allow the opponent to immediately jump and remove our piece, decreasing our number of points.
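
A bare-bones sketch of the idea might look like this. The `game` object and its methods are a hypothetical interface, with `score` following the point scheme above; a real checkers engine would also need a depth cutoff far short of the full game tree.

```python
def minimax(game, state, depth, maximizing):
    # Leaf: either the game is over or we hit the depth cutoff.
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    values = [minimax(game, game.apply(state, m), depth - 1, not maximizing)
              for m in game.legal_moves(state)]
    # Our turn: take the best value. Opponent's turn: assume they take
    # the value that is worst for us.
    return max(values) if maximizing else min(values)

def best_move(game, state, depth=6):
    return max(game.legal_moves(state),
               key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))
```
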
There is a slight oddity of behavior, though, when the algorithm is faced with a board that is impossible to win. Say you only have one piece left, in a corner, and the entire rest of the board, save a few empty spaces around you, is filled with your opponent's pieces. The algorithm won't choose to fight on as long as possible. It will just accept defeat and choose any path, whether or not it leads straight towards your loss. This might not really matter, because it has already analyzed every possible play on the board and concluded that there is no way for you to win. But it is an interesting habit nonetheless.

Monday, September 19, 2016

StarCraft AI Bots

One of the most interesting things mentioned in class last week was an AI competition: the AIIDE StarCraft AI Competition hosted by the University of Alberta. It sounded pretty cool to me, so I decided to check out some of the starter code that was posted for the competition. (They give you some starter code for your bot, if you choose to use it.) There is a huge amount of what would seem to be miscellaneous code to the non-StarCraft player: things like managing workers who are building something, or making sure a proper amount of resources is being gathered for the player. Inside-baseball knowledge seems to be necessary to pick out what is important and what isn't. Perhaps the most entertaining part of the miscellaneous tasks to be encoded is that the dynamics for each race are different, even when building things.

One would assume battle strategy has to differ greatly because each race has a different kind of army, but by extension everything needs to be re-coded for all three races. So it might be more accurate to say you are programming three AIs instead of just one. A build-order heuristic that is great for one race (when to build what type of unit) is completely useless for another, because the races don't share any units or buildings.

Another implication of managing the workers is that a huge amount of information about the game state needs to be checked continuously. This is to the advantage of the computer over the human, seeing as computers excel at completing trivial sequential tasks. A little research reveals that the top Korean pro players have an APM (actions per minute) of around 300-400; I couldn't find a breakdown of how much of that is spent on infrastructure as opposed to actual battles. Amateurs seem to be all over the place in APM, but breaking 75 seems to be amazing. So the computer has a massive advantage here. Based on this information about the limits of humans alone, it seems that computers should always be able to win.

But in fact, no. The top human players are still able to beat the top AIs consistently. In fact, it seemed to be a common complaint among better players that the AI that shipped with the game was laughably bad: too predictable and easy to manipulate. These AI projects do not have massive resources behind them like AlphaGo does, but the fact remains that real-time strategy games are a much more complex problem.

My conclusion after reading through the starter code and documentation is that a large amount of work goes into programming every bot. Not only that, domain knowledge is crucial. It seems to me that there is a very small subset of people who would be able to successfully program such an AI bot by themselves: it requires you to be an expert in the video game of StarCraft, as well as well versed in AI techniques and algorithms. This seems to be a common thread when it comes to implementing most of the algorithms covered in class: deep domain knowledge is needed.

So yes, in this case you could play video games for schoolwork, in order to refine your domain knowledge.

For the actual meat of the AI algorithms used, check this page.


https://github.com/davechurchill/ualbertabot/wiki/Artificial-Intelligence

Tuesday, September 13, 2016

Thinking of better heuristics


I thought the most challenging part of this week's assignment was writing a good heuristic; after all, it seems to be something that can always be improved upon. Furthermore, a heuristic designed for a complicated problem might work well on a simplified form of that problem, but then perform poorly once another variable is introduced. The key to writing a good heuristic is to simplify the problem before analyzing it, but what do you do when the problem isn't simple? Most problems can be represented in a way where exhaustive search is not a feasible option. So how can we improve?

Well - one idea is a hyper-heuristic, a sort of multi-level heuristic. The idea is to have a set of possible heuristics that you can fine-tune. Instead of writing one mega-heuristic that tries to jam-pack all possible information about the state into a single function, you bundle smaller single-purpose heuristics together. As in a multivariable optimization problem, an exact solution (h*) might never be found, but it can be approximated as closely as possible. Instead of traditional numerical analysis methods being used to find the "roots", machine learning is used to tune the heuristic being built. By mixing and matching different types of heuristics that measure several aspects of the problem, you can hopefully build a more robust heuristic.
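
As a toy sketch of that idea: the combined heuristic below is just a weighted sum of sub-heuristics, and the weights are tuned by simple random hill climbing. All the names are hypothetical, and `solve(weights)` stands in for running the actual search and reporting a solution cost.

```python
import random

def combined(state, heuristics, weights):
    # The "hyper" heuristic: a weighted sum of the single heuristics.
    return sum(w * h(state) for h, w in zip(heuristics, weights))

def tune(solve, weights, rounds=100, step=0.1):
    # Crude hill climbing on the weights: nudge them randomly and keep
    # the change whenever the resulting search does better.
    best_cost = solve(weights)
    for _ in range(rounds):
        candidate = [max(0.0, w + random.uniform(-step, step)) for w in weights]
        cost = solve(candidate)
        if cost < best_cost:
            weights, best_cost = candidate, cost
    return weights
```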

This is an intriguing approach to building a heuristic: instead of having to scrap all of the existing work, you can simply add the new heuristic to the hyper-heuristic. Then, as your algorithm runs, the hyper-heuristic will tune itself to accommodate the addition. Of course, this assumes the problem needs to be run many times (or there is time between solutions for training); there would be no benefit in forming a set of heuristics for a trivial problem. Conversely, there is probably great value in having a set of heuristics that has been trained and tuned over a long period of time. If you have a hyper-heuristic that works extremely well on a problem, there is a good chance it can be repurposed for other problems in the same set.

Closely related are heuristics that generate other heuristics. While their efficacy might be the same or better, as the domain expert you might lose out on insight into what is driving the solution, since you might want to re-weight the "equation" to see what difference it makes, if only for your own knowledge. The practical value of having insight into the major hang-up points of a solution can be greater than having a planner that silently works around them every time.

Monday, September 12, 2016

AI Pathfinding in Games

When it comes to having units in games make their way from one waypoint to another, there are many different approaches for finding the optimal route. A* is definitely the most popular pathing algorithm that developers get to choose from, but the problem is slightly more complicated than that. A* is still a fairly generic algorithm, and developers need to tune it to work best with what their game does; every game has different requirements. Still, A* is the best starting point and is implemented in many games.

Take a game like Civilization V, for example. Civ V is a turn-based strategy game that requires you to move units around a map made up of a hexagonal grid. Players move one unit at a time and units can't collide. An algorithm like A* works very well in this situation because the map is always static from the point of view of the single unit being moved. That unit can easily map a route through the hexagonal grid towards the goal, which is the location the player wants the unit to move to. This works well only because the world is static, though; games where everything is always moving require a different algorithm.
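
Here is a compact A* sketch on a square grid for reference (a hex grid like Civ V's would only need a different neighbor function and distance measure). The `blocked` set of impassable cells is a made-up minimal interface.

```python
import heapq

def astar(start, goal, blocked, width, height):
    def h(cell):
        # Manhattan distance: admissible on a 4-connected square grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in seen):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route to the goal exists
```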

Blizzard’s real-time strategy game, StarCraft, is a good example of a dynamic map. In StarCraft, players move units around the map nonstop, with multiple players moving units at the same time. If a player tells a unit to move across the map, the developer could continue to use the A* algorithm, but that would require computing an entirely new path whenever the unit encounters an obstacle it didn’t know about before. That would be quite inefficient, and there are better algorithms for this. Enter “Dynamic A*”, or D* for short. D* is much better suited for dynamic maps because it can change the cost of an arc as the algorithm runs. If it encounters an obstacle, it can dynamically search for other options to get to the goal.
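
For contrast, this is the naive replan-from-scratch loop described above, reusing the `astar` sketch from earlier; `sense_obstacles` is a hypothetical stand-in for the game revealing newly discovered blocked cells. D* avoids this by incrementally repairing the existing path instead of recomputing it.

```python
def move_unit(unit, goal, blocked, width, height):
    path = astar(unit, goal, blocked, width, height)
    while path and unit != goal:
        blocked |= sense_obstacles(unit)  # new information arrives mid-route
        if any(cell in blocked for cell in path):
            # Naive approach: throw the whole path away and start over.
            path = astar(unit, goal, blocked, width, height)
            continue
        unit = path[path.index(unit) + 1]  # take one step along the path
    return unit
```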


In short, developers can use whatever algorithms they want for their AI pathfinding. A* is the gold standard for static pathfinding, and developers can customize it to their needs. D* works very well for dynamic maps that are constantly changing, because it can recalculate the best path as it receives new information.

Monday, September 5, 2016

Natural Language Generation Final Project Idea

We haven’t gotten into any Natural Language Processing (and won’t until the end of this class, if we do at all), but I wanted to talk about it anyway.

For my project, I was thinking of doing something along the lines of Natural Language Generation with a functional language, namely Clojure. I feel like the expressive tendencies of functional languages fit well with human languages, and this could be something interesting to explore.

What is Natural Language Generation? To start, it is a sub-field of Natural Language Processing, but where NLP (as a whole) is concerned with understanding human language, NLG is concerned with sounding human. An example we have already seen in class is Siri: it needs to parse the information out of the question, which is the understanding part, and then the response is the generation aspect of the problem. Some NLG projects might be getting a robot to narrate what it is doing to a human, or having a kiosk at a mall interact with a customer via generated speech.

More interesting than a one-to-one mapping from actions to phrases would be a program that summarizes data. One of the big problems with the large amounts of data companies have nowadays is that it is hard to draw conclusions from raw data; there is no story that frames the data for the end user. Problems like creating summaries for videos and sound clips are common topics for computer scientists to tackle.

So my idea is this: make a program that takes all the news from an aggregator on one very specific topic, over a period of time, and condenses the most important ideas into a summary of about one page. To start, I would limit the domain to one very specific topic, with six months as my time period.

This is mildly ambitious, a problem most of my ideas suffer from. But I think it would touch upon many areas of this class. Going beyond simple template-driven text generation will require lots of filtering, along with the intelligence to pick out the important parts. If I can get away with using libraries for some of the heavy lifting in text processing (breaking everything into n-grams) and sentiment analysis, that would let me focus on the text generation part. The problem would be choosing the right algorithms at this step.

I know people have had success using convolutional neural networks to pick out sentiment from text, and that would give the program enough information to decide what is important and what isn't. The only problem is that a neural network might be too heavyweight for this. I was talking to a developer at a company that builds sentiment analysis APIs, and he said that sometimes a simple model making snap decisions can work better. It's tough to tell what to use, because I personally don't know enough about this part of the system yet. The frequency with which something is mentioned, and the sentiment attached to it, would probably be the most important signals for judging whether a piece of information makes it into the summary.
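
As a crude sketch of that scoring idea: the extractive summarizer below ranks sentences by the corpus-wide frequency of their words. A sentiment weight from whatever library ends up handling that part could be folded into `score` later. All of this is illustrative, not a committed design.

```python
import re
from collections import Counter

def summarize(text, n_sentences=5):
    sentences = re.split(r'(?<=[.!?])\s+', text)
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        # Average corpus frequency of the sentence's words; a sentiment
        # weight could be multiplied in here once that piece exists.
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return ' '.join(s for s in sentences if s in top)  # keep original order
```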

Another challenge would be keeping the summary from becoming something of a fill-in-the-blanks exercise, and instead giving it a more organic feel. Most of what I have read so far on this topic seems to be template-based, so there is more research to be done here.
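
For reference, this is the fill-in-the-blanks style in its simplest form; the templates here are made up. Rotating among several templates is one cheap trick to make the output feel less canned, though truly organic text would need much more.

```python
import random

TEMPLATES = [
    "Coverage of {topic} focused mostly on {theme}.",
    "Over the period, {topic} was discussed mainly in terms of {theme}.",
    "Reporting on {topic} kept coming back to {theme}.",
]

def render(topic, theme):
    # Pick a template at random so repeated summaries vary in phrasing.
    return random.choice(TEMPLATES).format(topic=topic, theme=theme)

print(render("the election", "polling numbers"))
```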

In any case, I think making something from nothing is always interesting. One of my goals in taking this class was to be able to take on projects like this on my own!