This document describes Jumper, a simple AI game in which a neural network controls a character that must jump to catch falling balls. The network has one input layer, two hidden layers, and one output layer, and is trained with supervised learning on manually provided examples; its output indicates whether the character should be on the floor or in the air. The inputs are the velocity of the ball and the displacement between the ball and the player, and the final action is selected with a maximum-membership defuzzification technique. The game is inspired by the T-Rex Chrome game.
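As a rough illustration of the decision pipeline described above, the sketch below wires the two stated inputs (ball velocity and ball-player displacement) through a small network with two hidden layers and applies a maximum-membership selection over the two output degrees ("floor" vs. "jump"). The layer sizes, activation functions, weight initialization, and function names are all assumptions for illustration; the document does not specify them.

```python
import numpy as np

# Hypothetical sketch of the Jumper decision network:
# 2 inputs (ball velocity, ball-player displacement),
# two hidden layers, and 2 output membership degrees.
# All dimensions and names here are illustrative assumptions.

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights, zero biases (assumed initialization).
    return rng.normal(0.0, 0.5, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(2, 4)   # input -> hidden 1
W2, b2 = init_layer(4, 4)   # hidden 1 -> hidden 2
W3, b3 = init_layer(4, 2)   # hidden 2 -> output memberships

def decide(ball_velocity, displacement):
    x = np.array([ball_velocity, displacement])
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    # Sigmoid outputs act as membership degrees for each action.
    out = 1.0 / (1.0 + np.exp(-(h2 @ W3 + b3)))
    # Maximum-membership defuzzification: choose the action
    # with the highest membership degree.
    return ("floor", "jump")[int(np.argmax(out))]

print(decide(ball_velocity=-3.0, displacement=1.5))
```

In a trained version, the weights would be fitted from the manually labeled examples rather than randomly initialized; this sketch only shows the forward pass and the defuzzification step.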