AI learns to Play Tetris Game: Building the Intelligence

Shashank Goyal
15 min read · 1 day ago

This article continues the series on building an AI that learns to play Tetris. Here, we focus on developing the core intelligence behind the AI model. Note that the general logic and solution from this project can be applied to other games and learning-based tasks as well.

The approach we will take uses the Python "NEAT" module, where NEAT stands for NeuroEvolution of Augmenting Topologies. In simpler terms, NEAT evolves artificial neural networks using genetic algorithms: networks are treated as genomes, the best-performing ones are selected as parents, and mutation and crossover produce the next generation. If you want to understand the concept in more detail, I recommend reading the initial NEAT paper by Kenneth O. Stanley and Risto Miikkulainen. You can also check out other papers that explore this concept and related topics here.
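To build intuition for the evolutionary loop that NEAT automates, here is a toy sketch in pure Python (hypothetical, greatly simplified): each genome is just a list of weights, and repeated selection plus mutation improves fitness over generations. Real NEAT also evolves the network topology itself, which this sketch does not attempt; the `fitness` and `mutate` functions below are illustrative stand-ins, not part of the NEAT library.

```python
import random

random.seed(0)  # reproducible runs for this demo

def fitness(genome):
    # Hypothetical fitness: negative squared distance to a target
    # vector, so a higher (less negative) value is better. In the
    # Tetris project, fitness would instead be the game score.
    target = [0.5, -0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.5, scale=0.1):
    # Perturb each weight with small Gaussian noise at probability `rate`.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, generations=50):
    # Start from a random population of weight vectors.
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]  # keep the fittest quarter
        # Refill the population with mutated copies of random parents.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 4))  # fitness should be close to 0 after evolution
```

In the actual project, the NEAT module handles this loop for us: we only supply a fitness function that plays the game with a candidate network, and the library takes care of selection, mutation, and speciation.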

This project is blogged in two parts. The previous part, AI learns to Play Tetris Game: Building the GUI, describes the development of the GUI used in this project; you can find it here.

Part II — Building the Intelligence

Before moving forward, if you are not comfortable with Python, NumPy, or PyGame, I recommend the following YouTube playlist and video:

Written by Shashank Goyal

I'm Shashank Goyal, a passionate Dual Master's student at Johns Hopkins University, pursuing degrees in Computer Science and Robotics.
