Part 1: Can a Single Line Separate the Classes?
Select a logic gate and try to draw a line that separates the green (output=1) points from the red (output=0) points.
| x1 | x2 | Output |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
Perceptron (Single Neuron)
Activation Function
Perceptron Weights
Adjust weights to find a separating line
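A single perceptron can be sketched in a few lines. The weights below (w1 = w2 = 1, b = -0.5) are one illustrative choice that separates the OR truth table shown above; any line that isolates (0, 0) from the other three points works equally well.

```python
def step(z):
    """Step activation: fires (1) when the weighted sum is positive."""
    return 1 if z > 0 else 0

def perceptron(x1, x2, w1=1.0, w2=1.0, b=-0.5):
    # The decision boundary is the line w1*x1 + w2*x2 + b = 0.
    return step(w1 * x1 + w2 * x2 + b)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"({x1}, {x2}) -> {perceptron(x1, x2)}")
```

Moving the sliders corresponds to changing `w1`, `w2`, and `b` here, which rotates and shifts the boundary line.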
Part 2: Solving XOR with a Hidden Layer
A hidden layer transforms the input space, making the problem linearly separable. Adjust the weights to see how the transformation works. Two possible approaches:

- h₁ detects "at least one input is on" (OR) and h₂ detects "not both are on" (NAND); combine them with AND.
- h₁ detects "x₁ is on but not x₂" and h₂ detects "x₂ is on but not x₁"; combine them with OR.
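Both approaches can be checked directly. The weight values below are illustrative choices (one of infinitely many that work), with each hidden unit written as step(w·x + b):

```python
def step(z):
    return 1 if z > 0 else 0

def network(x1, x2, w1, w2, wo):
    # w1, w2: (weight_x1, weight_x2, bias) for hidden neurons h1 and h2.
    # wo: (weight_h1, weight_h2, bias) for the output neuron.
    h1 = step(w1[0] * x1 + w1[1] * x2 + w1[2])
    h2 = step(w2[0] * x1 + w2[1] * x2 + w2[2])
    return step(wo[0] * h1 + wo[1] * h2 + wo[2])

# Preset A: h1 = OR, h2 = NAND, output = AND.
preset_a = ((1, 1, -0.5), (-1, -1, 1.5), (1, 1, -1.5))
# Preset B: h1 = (x1 AND NOT x2), h2 = (x2 AND NOT x1), output = OR.
preset_b = ((1, -1, -0.5), (-1, 1, -0.5), (1, 1, -0.5))

for preset in (preset_a, preset_b):
    print([network(x1, x2, *preset) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

Both presets print `[0, 1, 1, 0]`, the XOR truth table, despite building it from different intermediate features.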
Network Architecture
Activation Function
h1 Weights
h1 = σ(w11·x1 + w12·x2 + b1)
Boundary: w11·x1 + w12·x2 + b1 = 0
h2 Weights
h2 = σ(w21·x1 + w22·x2 + b2)
Boundary: w21·x1 + w22·x2 + b2 = 0
Output Layer Weights
How XOR Works
XOR = h1 AND h2 (valid when h1 computes OR and h2 computes NAND)
Adjust the weights to find a valid XOR solution.
Forward Pass Computation
| Input (x1,x2) | Hidden (h1,h2) | Output (ŷ) | Target | Correct? |
|---|---|---|---|---|
| (0, 0) | - | - | 0 | - |
| (0, 1) | - | - | 1 | - |
| (1, 0) | - | - | 1 | - |
| (1, 1) | - | - | 0 | - |
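The table above can be filled in programmatically. The sketch below assumes the OR/NAND/AND preset (the sliders let you pick other weights):

```python
def step(z):
    return 1 if z > 0 else 0

# Illustrative OR / NAND / AND weights, each as (weight_x1, weight_x2, bias)
# for the hidden units and (weight_h1, weight_h2, bias) for the output.
W_H1, W_H2, W_OUT = (1, 1, -0.5), (-1, -1, 1.5), (1, 1, -1.5)

def forward(x1, x2):
    """One forward pass; returns the hidden activations and the output."""
    h1 = step(W_H1[0] * x1 + W_H1[1] * x2 + W_H1[2])
    h2 = step(W_H2[0] * x1 + W_H2[1] * x2 + W_H2[2])
    y = step(W_OUT[0] * h1 + W_OUT[1] * h2 + W_OUT[2])
    return h1, h2, y

targets = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print("| Input (x1,x2) | Hidden (h1,h2) | Output | Target | Correct? |")
for (x1, x2), t in targets.items():
    h1, h2, y = forward(x1, x2)
    print(f"| ({x1}, {x2}) | ({h1}, {h2}) | {y} | {t} | {'yes' if y == t else 'no'} |")
```

With these weights every row comes out correct; changing any slider re-runs the same forward pass with the new values.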
The Key Insight
The XOR function outputs 1 when exactly one input is 1. In the input space, the two classes (0 and 1) are arranged diagonally — no single line can separate them.
The hidden layer acts as a feature transformation. Each hidden neuron computes:

hi = σ(wi1·x1 + wi2·x2 + bi)

where σ is the step function. This transforms the 4 input points into a new 2D space where they can be linearly separated.
There are infinitely many sets of weights that correctly solve XOR — the two presets above are just two examples of fundamentally different approaches.