Overview: Generative Adversarial Networks – When Deep Learning Meets Game Theory
Note: This post was originally published on AH’s Blog (WordPress) on January 17, 2017, and has been migrated here.
Before diving into Generative Adversarial Networks (GANs), a few foundational concepts are worth establishing.
Key Concepts
Discriminative Models predict a hidden class given observed features. They model the conditional probability P(y | x₁, x₂, …, xₙ). Examples: SVMs, Feedforward Neural Networks.
Generative Models learn the joint distribution of features and classes — P(x₁, x₂, …, xₙ, y) — enabling them to generate new samples from the learned distribution. Examples: Restricted Boltzmann Machines (RBMs), HMMs. Note: Vanilla Auto-encoders are not generative models (they reconstruct); Variational Auto-encoders (VAEs) are.
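To make the joint-vs-conditional distinction concrete, here is a minimal sketch using a hypothetical count table over a binary feature x and binary class y (the counts are made up for illustration):

```python
import numpy as np

# Toy count table: rows = feature value x in {0, 1},
# columns = class label y in {0, 1}.
counts = np.array([[30.0, 10.0],
                   [ 5.0, 55.0]])

# A generative view models the joint distribution P(x, y);
# this is enough to sample brand-new (x, y) pairs.
joint = counts / counts.sum()

# A discriminative view only needs the conditional P(y | x),
# i.e. the joint renormalized within each row of x.
conditional = joint / joint.sum(axis=1, keepdims=True)

print(joint)        # sums to 1 over the whole table
print(conditional)  # each row sums to 1
```

The joint distribution carries strictly more information: P(y | x) can be derived from P(x, y), but not the other way around without P(x).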
Nash Equilibrium (Game Theory): A stable game state in which no player can benefit by unilaterally changing their strategy, given the other players’ strategies. Each player is satisfied with their outcome given the others’ choices.
Minimax: An algorithm for two-player games where each player tries to minimize the maximum possible loss the opponent can inflict. Used in Chess, Tic-Tac-Toe, Connect-4, and other rule-based decision games.
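The algorithm itself is a short recursion. Here is a minimal sketch on a hand-built game tree (leaves are payoffs to the maximizing player; the tree values are made up for illustration):

```python
def minimax(node, maximizing):
    """Return the minimax value of a game-tree node.

    A node is either a number (terminal payoff to the maximizer)
    or a list of child nodes for the player to move.
    """
    if isinstance(node, (int, float)):  # terminal state
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the maximizer picks a branch, then the minimizer
# picks the worst (for the maximizer) leaf inside that branch.
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree, maximizing=True))  # -> 3, the best guaranteed payoff
```

Each player assumes the opponent plays optimally, so the maximizer chooses the branch whose worst-case leaf is best: here branch one, guaranteeing 3.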
Generative Adversarial Networks (GANs)

A GAN consists of two models competing during training:
- Generator (G): Produces fake samples intended to match the distribution of real data.
- Discriminator (D): Learns to distinguish real samples from the Generator’s fakes.
The dynamic is adversarial — G tries to fool D; D tries to catch G. This is precisely the Minimax setup: each player attempts to minimize the worst outcome the other can produce.
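In Goodfellow et al.’s formulation, this game is captured by a single value function, where D(x) is the probability D assigns to its input being real and z is the Generator’s input noise:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D plays the maximizer, rewarding correct classification of real and fake samples; G plays the minimizer, trying to make D misclassify its outputs.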
Training continues iteratively until G generates samples indistinguishable from real data — at which point the best D can do is guess, assigning probability ½ to every input. When neither model can improve by changing its strategy unilaterally, the system has reached a Nash Equilibrium.
During training, both models are updated via backpropagation against the same value function, but each adjusts only its own parameters — neither model can directly modify the other’s weights.
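The alternating update can be sketched as follows. This is a minimal illustration, assuming PyTorch, tiny MLPs, and a made-up 1-D "real" distribution N(4, 1); it is not the setup from any specific paper:

```python
import torch
import torch.nn as nn

# Hypothetical models: G maps noise to a 1-D sample, D scores realness.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) + 4.0   # samples from the "real" data
    fake = G(torch.randn(32, 1))      # samples from the Generator

    # Discriminator step: push D(real) -> 1, D(fake) -> 0.
    # fake.detach() blocks gradients from flowing into G's weights.
    loss_D = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step: push D(G(z)) -> 1. Gradients flow through D,
    # but only G's parameters are updated by opt_G.
    loss_G = bce(D(G(torch.randn(32, 1))), torch.ones(32, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

The `detach()` call and the separate optimizers are what enforce the "shared objective, independent updates" structure: each model sees the same game but can only move its own pieces.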
Status
This was an overview written while still learning GANs. The follow-up post applies the concepts in more detail: GANs Part 2 — Camouflage your Predator!
References
- Goodfellow et al., NIPS 2016 Tutorial on GANs
- KDnuggets: GANs Overview
- Wikipedia: Generative Adversarial Networks
- Wikipedia: Minimax
- Wikipedia: Nash Equilibrium
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach
