In this paper, we present a proof-of-concept reinforcement learning approach based on deep Q-learning that combines a recurrent Convolutional Neural Network (rCNN) with a quantum Convolutional Neural Network (qCNN) to evaluate chess board states. The deep Q-score produced by our hybrid network is integrated into a minimax tree search with alpha-beta pruning, enabling near real-time assessment of board states and allowing our AI agent to select the optimal move efficiently. We compare our method against a similarly designed, purely classical residual Convolutional Neural Network under the same tree-search parameters, both to determine whether simulated quantum computing offers any advantage for chess-playing neural networks and to analyze the training and inference speeds of the two approaches.
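
To make the search component concrete, the sketch below shows how a learned board evaluation can be plugged into a minimax search with alpha-beta pruning. It is a minimal illustration, not the paper's implementation: it assumes the python-chess package, and the `evaluate` function is a hypothetical stand-in (a plain material count) for the deep Q-score that the hybrid qCNN or the classical residual CNN would actually produce; `best_move` is likewise an illustrative helper.

```python
import chess  # assumption: the python-chess package (pip install chess)

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}


def evaluate(board: chess.Board) -> float:
    """Hypothetical stand-in for the network's deep Q-score.

    The agent described above would instead run its CNN on an encoding
    of `board`; a simple material count is used here so the sketch runs.
    """
    score = 0.0
    for piece, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece, chess.WHITE))
        score -= value * len(board.pieces(piece, chess.BLACK))
    return score


def alphabeta(board: chess.Board, depth: int,
              alpha: float, beta: float, maximizing: bool) -> float:
    """Minimax with alpha-beta pruning; leaf nodes are scored by `evaluate`."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    if maximizing:
        value = float("-inf")
        for move in board.legal_moves:
            board.push(move)
            value = max(value, alphabeta(board, depth - 1, alpha, beta, False))
            board.pop()
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing side avoids this line
        return value
    value = float("inf")
    for move in board.legal_moves:
        board.push(move)
        value = min(value, alphabeta(board, depth - 1, alpha, beta, True))
        board.pop()
        beta = min(beta, value)
        if beta <= alpha:
            break  # alpha cutoff: the maximizing side avoids this line
    return value


def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the root move whose subtree scores best for the side to move."""
    maximizing = board.turn == chess.WHITE
    best, best_score = None, float("-inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = alphabeta(board, depth - 1,
                          float("-inf"), float("inf"), not maximizing)
        board.pop()
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best, best_score = move, score
    return best


print(best_move(chess.Board(), depth=2))  # prints a legal opening move
```

Because alpha-beta pruning only affects which subtrees are explored, swapping the placeholder `evaluate` for a trained network's Q-score changes the move quality without changing the search logic, which is what allows the quantum and classical networks to be compared under identical tree-search parameters.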