NIKOLA Chess Engine
Supercomputer Chess Engine

Chess Engine for the AI Era

NIKOLA is a supercomputer-class chess engine built for modern data-center GPUs—from A100 through Blackwell B300. Written entirely in Mind Language, it uses Mind IR for NNUE evaluation and distributed endgame intelligence—without Python, without Rust, without dependencies.

  • 3200+ Elo target (SPRT-verified)
  • 100M+ NNUE positions (scaling to 100B+)
  • 100M+ opening book positions (quality-filtered)
  • B300 Blackwell ready (multi-GPU clusters)
  • UCI compatible (+ Lichess Bot)

Core Technology

Built on cutting-edge GPU architectures and advanced AI algorithms

Artificial Intelligence

Alexander Kronrod, a pioneering Russian AI researcher, famously stated, "Chess is the Drosophila of AI." For humans, the "self" acts as a profound, unifying symbol, encompassing the player, their goal-driven system, and the interplay of mind and body. To replicate expert human strategy, we developed an advanced AI engine that prioritizes the strategic essence of chess. The engine applies deep analytical evaluation, drawing on a vast dataset of recorded games and the styles of legendary players. During play it continuously evaluates the current game state and every algorithm running in parallel, then selects the optimal strategy. Perfection is a much harder problem than merely being unbeatable, and perfection is our goal.

Numerical Analytics

Recent advances in unified memory architectures have transformed data access, letting CPUs and GPUs share ultra-high-speed memory seamlessly. NVIDIA's Blackwell architecture pushes this integration further, with stacked HBM3e modules delivering bandwidths in excess of 3 terabytes per second. NVIDIA data-center GPUs from the A100 through the Blackwell B300 offer HBM capacities of 192 GB and beyond, with NVLink 5.0 connectivity for cluster operation. Modern clusters built with Dell, Supermicro, and NVIDIA hardware combine Spectrum-X networking, NVLink 5.0, and InfiniBand NDR800 to reach GPU throughput efficiencies of up to 98%.
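
To make the shared-memory model concrete, here is a minimal CUDA sketch using managed (unified) memory, which all GPUs from the A100 onward support: a single allocation is visible to both host and device, so a transposition table can be touched from either side without explicit copies. The TTEntry layout is an illustrative placeholder, not NIKOLA's actual data structure.

```cuda
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative transposition-table entry; not NIKOLA's actual layout.
struct TTEntry {
    uint64_t key;    // Zobrist hash of the position
    int32_t  score;  // evaluation in centipawns
    uint16_t move;   // best move found
    uint8_t  depth;  // search depth of the stored result
    uint8_t  flags;  // bound type (exact / lower / upper)
};

// Device kernel that initializes the shared table in place.
__global__ void clear_table(TTEntry* table, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) table[i] = TTEntry{};
}

int main() {
    const size_t entries = 1 << 20;
    TTEntry* table = nullptr;

    // Unified (managed) memory: one pointer valid on both host and device,
    // migrated on demand instead of copied explicitly.
    cudaMallocManaged(&table, entries * sizeof(TTEntry));

    clear_table<<<(entries + 255) / 256, 256>>>(table, entries);
    cudaDeviceSynchronize();

    // The CPU reads the same allocation directly; no cudaMemcpy needed.
    printf("entry 0 depth = %u\n", (unsigned)table[0].depth);
    cudaFree(table);
    return 0;
}
```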

Blackwell Architecture

Next-generation supercomputers integrate hundreds of thousands of NVIDIA Blackwell GPUs. These CPU-GPU hybrid systems are a massive leap over earlier generations and enable distributed chess computation at unprecedented scale. NVIDIA GPUDirect lets GPUs exchange data directly, while Remote Direct Memory Access (RDMA) delivers high-throughput, low-latency transfers that bypass the operating system. Combining Spectrum-X networking, NVLink 5.0, and InfiniBand NDR800, modern clusters sustain over 95% throughput across thousands of GPUs.

Dynamic Parallelism

Dynamic parallelism marks a leap in GPU computing, allowing kernels to be spawned on demand from the GPU itself, without CPU involvement. Built into NVIDIA's CUDA framework, it lets threads within a grid configure, launch, and synchronize new grids. The hardware-based SPAWN framework addresses the scheduling challenges this creates: by controlling resource allocation for dynamically generated kernels, it cuts launch overheads and queuing delays. On systems powered by Blackwell GPUs, whose HBM3e memory exceeds 8 TB/s of aggregate bandwidth, this kind of scheduling pays off for irregular workloads such as chess game-tree traversal.
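
The sketch below shows the device-side launch pattern in plain CUDA: a parent kernel expands a node and queues one child grid per move without returning to the CPU. The Position/Move types and kernel names are hypothetical placeholders, the move generation is stubbed, and SPAWN's hardware scheduling itself is not modeled.

```cuda
#include <cuda_runtime.h>

// Hypothetical placeholder types; NIKOLA's real board representation differs.
struct Position { unsigned long long occupied; };
struct Move     { unsigned short from, to; };

// Stub move generation/application so the sketch is self-contained.
__device__ int generate_moves(const Position&, Move* out) {
    out[0] = Move{12, 28};        // pretend there is exactly one legal move
    return 1;
}
__device__ void apply_move(const Position& in, Move, Position* out) {
    *out = in;                    // no real move application in this sketch
}

// Child grid: searches the subtree below one move and records its score.
__global__ void search_child(Position pos, int depth, int* score_out) {
    if (threadIdx.x == 0) *score_out = 0;   // placeholder for a real subtree search
}

// Parent grid: expands one node and launches a child grid per legal move,
// entirely on the GPU. Compile with -rdc=true to enable device-side launches.
__global__ void search_parent(Position root, int depth, int* scores) {
    __shared__ Move moves[256];
    __shared__ int  n_moves;

    if (threadIdx.x == 0) n_moves = generate_moves(root, moves);
    __syncthreads();

    // One thread per move queues a child grid; the hardware scheduler runs
    // them without any round trip to the CPU.
    if ((int)threadIdx.x < n_moves && depth > 0) {
        Position next;
        apply_move(root, moves[threadIdx.x], &next);
        search_child<<<1, 128>>>(next, depth - 1, &scores[threadIdx.x]);
    }
}

int main() {
    int* d_scores = nullptr;
    cudaMalloc(&d_scores, 256 * sizeof(int));
    search_parent<<<1, 256>>>(Position{0x1800000000ULL}, 2, d_scores);
    cudaDeviceSynchronize();
    cudaFree(d_scores);
    return 0;
}
```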

Built with Mind Language

NIKOLA is written entirely in Mind, a systems programming language designed for high-performance computing with first-class support for parallelism and hardware optimization. Mind compiles to native code that rivals hand-tuned assembly while providing safe, expressive abstractions.

Mind IR

Mind Intermediate Representation provides a low-level abstraction that maps directly to hardware, enabling optimal code generation for CPUs and GPUs. The IR supports SSA form, explicit memory management, and hardware-specific intrinsics for maximum control.

Mind Intrinsics Compiler

MIC compiles Mind code to native machine instructions, with full support for AVX-512 and AVX-VNNI on Intel and AMD processors, AMX on recent Intel Xeon CPUs, and SM100 PTX for NVIDIA Blackwell GPUs. The compiler performs aggressive optimizations while preserving programmer intent.
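
To illustrate the instruction level MIC targets on CPUs, here is a hand-written, host-side sketch of a quantized dot product using the AVX-512 VNNI instruction vpdpbusd, the kind of operation at the heart of int8 NNUE layers. It is ordinary intrinsics code shown for reference, not MIC output, and the function name is illustrative.

```cuda
#include <immintrin.h>
#include <cstdint>

// Host-side sketch: dot product of a uint8 activation vector against an int8
// weight row using AVX-512 VNNI (vpdpbusd). n is assumed to be a multiple of 64.
// Illustrative only; MIC emits comparable instructions from high-level Mind code.
int32_t dot_u8_i8_avx512vnni(const uint8_t* act, const int8_t* wgt, int n) {
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i < n; i += 64) {
        __m512i a = _mm512_loadu_si512(act + i);   // 64 uint8 activations
        __m512i w = _mm512_loadu_si512(wgt + i);   // 64 int8 weights
        acc = _mm512_dpbusd_epi32(acc, a, w);      // fused u8*i8 multiply-accumulate into i32
    }
    return _mm512_reduce_add_epi32(acc);           // horizontal sum of the 16 i32 lanes
}
```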

Mind Array Processing

MAP enables efficient array operations with automatic parallelization across CPU cores and GPU streaming multiprocessors. Perfect for batch position evaluation and neural network inference, MAP handles memory layout optimization and kernel fusion automatically.
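
As a point of reference, the hand-written CUDA kernel below shows the kind of batched, one-block-per-position evaluation MAP is intended to generate automatically. The feature layout, sizes, and toy output layer are illustrative assumptions, not NIKOLA's network.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

constexpr int HIDDEN = 256;   // illustrative accumulator width

// Hypothetical packed feature list for one position (up to 64 active features).
struct FeatureSet { int32_t idx[64]; int32_t count; };

// One thread block evaluates one position: each thread owns one lane of the
// hidden accumulator, sums the weight rows of the active features, then a
// block-wide reduction produces the output score.
__global__ void batch_eval(const FeatureSet* batch, const int16_t* weights,
                           int32_t* scores) {
    const FeatureSet& fs = batch[blockIdx.x];
    int32_t acc = 0;
    for (int f = 0; f < fs.count; ++f) {
        // weights laid out as [feature][HIDDEN]; threadIdx.x picks a column.
        acc += weights[fs.idx[f] * HIDDEN + threadIdx.x];
    }
    // Toy "output layer": clipped ReLU, then a block-wide sum in shared memory.
    acc = max(0, min(acc, 127));
    __shared__ int32_t partial[HIDDEN];
    partial[threadIdx.x] = acc;
    __syncthreads();
    if (threadIdx.x == 0) {
        int32_t s = 0;
        for (int i = 0; i < HIDDEN; ++i) s += partial[i];
        scores[blockIdx.x] = s;
    }
}

// Launch: one block of HIDDEN threads per position in the batch, e.g.
//   batch_eval<<<batch_size, HIDDEN>>>(d_batch, d_weights, d_scores);
```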

Key Features

Everything you need for world-class chess computation

NNUE Evaluation

HalfKAv2 neural network architecture trained on over 100 million positions from Lichess master games, with incremental accumulator updates on make/unmake. Supports AVX-512, AVX-VNNI, and AMX on CPU, plus CUDA 12.x with FP8/INT8 tensor cores for GPU acceleration.
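
A sketch of the incremental-update idea (generic NNUE practice, not NIKOLA's code): a move flips only a handful of (piece, square) features, so the hidden accumulator is patched by subtracting and adding a few weight rows instead of recomputing the full feature transform. Sizes and types below are illustrative.

```cuda
#include <cstdint>

constexpr int HIDDEN = 256;   // illustrative; real HalfKAv2 accumulators are wider

// Hypothetical accumulator: one hidden vector per perspective.
struct Accumulator { int16_t v[2][HIDDEN]; };

// Incremental update: a move deactivates a few (piece, square) features and
// activates a few others, so we subtract/add the corresponding weight rows
// instead of re-running the full feature transform over the whole board.
void update_accumulator(Accumulator& acc, int side,
                        const int16_t* weights,          // [feature][HIDDEN]
                        const int* removed, int n_removed,
                        const int* added,   int n_added) {
    for (int i = 0; i < n_removed; ++i) {
        const int16_t* row = weights + removed[i] * HIDDEN;
        for (int j = 0; j < HIDDEN; ++j) acc.v[side][j] -= row[j];
    }
    for (int i = 0; i < n_added; ++i) {
        const int16_t* row = weights + added[i] * HIDDEN;
        for (int j = 0; j < HIDDEN; ++j) acc.v[side][j] += row[j];
    }
}
```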

Opening Book

Comprehensive opening book with over 100 million positions extracted from master-level games, stored in mmap-friendly Polyglot format. Includes weighted move selection based on win rates and supports multiple repertoires.
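
For reference, a Polyglot entry is a fixed 16-byte record (position key, move, weight, learn value), stored big-endian and sorted by key, which is what makes the format mmap-friendly. The sketch below shows generic weighted move selection over the entries found for a position; it is not NIKOLA's loader, and the byte-swapping and binary search by key are omitted.

```cuda
#include <cstdint>
#include <random>
#include <vector>

// Polyglot book entry: 16 bytes on disk, big-endian, sorted by key.
struct BookEntry {
    uint64_t key;     // Zobrist-style hash of the position
    uint16_t move;    // encoded move
    uint16_t weight;  // relative playing frequency / quality weight
    uint32_t learn;   // learning data (unused here)
};

// Weighted move selection: each entry is chosen with probability
// weight / sum(weights), so popular master moves dominate while
// sidelines still appear occasionally.
uint16_t pick_book_move(const std::vector<BookEntry>& entries, std::mt19937& rng) {
    uint32_t total = 0;
    for (const auto& e : entries) total += e.weight;
    if (entries.empty() || total == 0) return 0;   // no usable book move

    std::uniform_int_distribution<uint32_t> dist(1, total);
    uint32_t roll = dist(rng);
    for (const auto& e : entries) {
        if (roll <= e.weight) return e.move;
        roll -= e.weight;
    }
    return entries.back().move;   // unreachable if weights are consistent
}
```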

GPU Acceleration

Native support for NVIDIA data-center GPUs from A100 through Blackwell (B200/B300) via Mind MAP. Uses FP8 tensor cores (Hopper and later) for high-throughput neural network inference.

Search Algorithm

State-of-the-art alpha-beta pruning with principal variation search (PVS), null move pruning, late move reductions (LMR), transposition tables with Zobrist hashing, killer moves, history heuristics, aspiration windows, and iterative deepening with pondering.
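
As a minimal illustration of the PVS core (not NIKOLA's actual search, which layers null-move pruning, LMR, transposition tables, and the other techniques listed above on top), the sketch below searches the first move with a full window and the remaining moves with a zero-width window, re-searching only on an unexpected fail-high. The board interface is stubbed.

```cuda
#include <algorithm>
#include <vector>

// Stubbed board interface standing in for the real engine types.
struct Position { /* board state elided */ };
static std::vector<int> legal_moves(const Position&) { return {}; }   // stub
static Position make_move(const Position& p, int) { return p; }       // stub
static int evaluate(const Position&) { return 0; }                    // stub NNUE eval

// Principal variation search: the first move gets the full (alpha, beta)
// window; later moves are probed with a zero-width window and re-searched
// only if they unexpectedly beat alpha.
int pvs(const Position& pos, int depth, int alpha, int beta) {
    if (depth == 0) return evaluate(pos);
    bool first = true;
    for (int move : legal_moves(pos)) {
        Position child = make_move(pos, move);
        int score;
        if (first) {
            score = -pvs(child, depth - 1, -beta, -alpha);
            first = false;
        } else {
            score = -pvs(child, depth - 1, -alpha - 1, -alpha);   // zero-width probe
            if (score > alpha && score < beta)                    // fail-high: re-search
                score = -pvs(child, depth - 1, -beta, -alpha);
        }
        alpha = std::max(alpha, score);
        if (alpha >= beta) break;                                 // beta cutoff
    }
    return alpha;
}
```

Zero-width probes fail fast, so with good move ordering most nodes never pay for the expensive full-window re-search.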

UCI Protocol

Full UCI (Universal Chess Interface) protocol support for seamless integration with popular chess GUIs including Arena, ChessBase, CuteChess, and Fritz. Also supports Lichess Bot API for automated online play.
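
A minimal UCI handshake looks like the sketch below: the GUI writes text commands to the engine's stdin and the engine replies on stdout. This is a generic skeleton with the position parsing and search stubbed out, not NIKOLA's implementation.

```cuda
#include <iostream>
#include <sstream>
#include <string>

// Minimal UCI loop: enough for a GUI to detect the engine and request a move.
int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::istringstream in(line);
        std::string cmd;
        in >> cmd;

        if (cmd == "uci") {
            std::cout << "id name NIKOLA\n"
                      << "id author NIKOLA team\n"
                      << "uciok" << std::endl;
        } else if (cmd == "isready") {
            std::cout << "readyok" << std::endl;
        } else if (cmd == "position") {
            // parse "startpos" or "fen ..." plus "moves ..." (omitted)
        } else if (cmd == "go") {
            // run the search (omitted); every "go" must end with a bestmove
            std::cout << "bestmove e2e4" << std::endl;
        } else if (cmd == "quit") {
            break;
        }
    }
    return 0;
}
```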

Data Service

Connects to dedicated chess data servers via HTTP for opening book queries, NNUE position cache lookups, and Syzygy tablebase probing. Supports local caching and fallback modes for offline operation.
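
The lookup order described above can be sketched as a cache-first probe with graceful offline fallback; the http_get helper and the /nnue/ endpoint path below are hypothetical placeholders, not NIKOLA's actual service API.

```cuda
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical HTTP helper; stands in for whatever client the engine links.
std::optional<std::string> http_get(const std::string&) { return std::nullopt; } // stub

// Cache-first lookup: local cache, then the remote data service, then a miss
// that tells the caller to fall back to local NNUE evaluation.
class EvalService {
public:
    explicit EvalService(std::string base_url) : base_url_(std::move(base_url)) {}

    std::optional<int32_t> probe(uint64_t position_key) {
        if (auto it = cache_.find(position_key); it != cache_.end())
            return it->second;                                  // 1. local cache hit

        // 2. remote data service (endpoint path is illustrative; assumes the
        //    service returns a plain centipawn integer in the body)
        auto body = http_get(base_url_ + "/nnue/" + std::to_string(position_key));
        if (body) {
            int32_t score = std::stoi(*body);
            cache_[position_key] = score;
            return score;
        }
        return std::nullopt;                                    // 3. offline fallback
    }

private:
    std::string base_url_;
    std::unordered_map<uint64_t, int32_t> cache_;
};
```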

Technology Stack

NVIDIA CUDA

  • GPUDirect RDMA
  • NVLink 5.0
  • CUDA cuBLAS
  • Dynamic Parallelism
  • Triton Inference Server
  • cuDNN 9.x

Mind Language

  • Mind IR Compiler
  • MIC Intrinsics
  • MAP Array Processing
  • SM100 PTX Backend
  • AVX-512/AMX Support
  • OpenCL 3.0

Chess Engine

  • NNUE HalfKAv2
  • Polyglot Opening Book
  • Syzygy Tablebases
  • UCI Protocol
  • Lichess Bot API
  • FIDE Time Controls

Infrastructure

  • Spectrum-X Networking
  • InfiniBand NDR800
  • HBM3e Memory
  • RAPIDS Analytics
  • Nsight Systems
  • Container-ready

Get in Touch

Have questions about NIKOLA or want to contribute to the project? We'd love to hear from you. Join our community of chess enthusiasts and AI researchers.