Convexity of the Feasible Region in Linear Programming

Exploring the cinematic intuition of Convexity of the Feasible Region in Linear Programming.


The Formal Theorem

Let $S$ be the feasible region defined by a system of linear inequalities $A\mathbf{x} \leq \mathbf{b}$ and $\mathbf{x} \geq \mathbf{0}$. If $\mathbf{x}_1 \in S$ and $\mathbf{x}_2 \in S$, then any convex combination $\mathbf{x}_\lambda = \lambda \mathbf{x}_1 + (1 - \lambda) \mathbf{x}_2$ for $\lambda \in [0, 1]$ must satisfy the constraints. Specifically, for any linear constraint $\mathbf{a}_i^T \mathbf{x} \leq b_i$:

$$\mathbf{a}_i^T (\lambda \mathbf{x}_1 + (1 - \lambda) \mathbf{x}_2) \leq b_i$$
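The inequality follows from one line of algebra: because $\lambda \geq 0$ and $1 - \lambda \geq 0$, a weighted average of two values that each satisfy the bound cannot exceed the same weighted average of the bound itself:

```latex
\mathbf{a}_i^T \mathbf{x}_\lambda
  = \lambda\,\mathbf{a}_i^T \mathbf{x}_1 + (1-\lambda)\,\mathbf{a}_i^T \mathbf{x}_2
  \leq \lambda b_i + (1-\lambda)\, b_i
  = b_i .
```

Since this holds for every constraint row $i$, and $\mathbf{x}_\lambda \geq \mathbf{0}$ by the same argument, $\mathbf{x}_\lambda \in S$.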

Analytical Intuition.

Imagine the feasible region as a vast, crystalline chamber constructed from flat, obsidian walls of linear constraints. When we speak of 'convexity', we are describing the structural integrity of this chamber. If you were to pick any two points $\mathbf{x}_1$ and $\mathbf{x}_2$ residing anywhere inside this room (on the floor, clinging to the ceiling, or floating in the center), the straight-line path connecting them never pierces a wall to exit into the forbidden exterior.

Mathematically, the linearity of the constraints $A\mathbf{x} \leq \mathbf{b}$ ensures that the weighted average $\lambda \mathbf{x}_1 + (1 - \lambda) \mathbf{x}_2$ satisfies every inequality whenever its two endpoints do. Because each constraint defines a half-space, and the intersection of any number of half-spaces is inherently convex, the feasible region forms a convex polyhedron (a 'convex polytope' when it is bounded). This is the 'Golden Rule' of optimization: because the terrain has no holes, no hidden coves, and no jagged inward dents, any local summit we find on this landscape is guaranteed to be the global one.
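The chamber metaphor is easy to spot-check numerically. Below is a minimal sketch (assuming NumPy; the constraint matrix and the two points are illustrative, not taken from the text) that picks two feasible points of a small system $A\mathbf{x} \leq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$ and verifies that every sampled point on the segment between them stays feasible:

```python
import numpy as np

# Illustrative 2-D feasible region: A x <= b together with x >= 0.
A = np.array([[1.0, 1.0],    # x + y <= 4
              [2.0, 1.0]])   # 2x + y <= 6
b = np.array([4.0, 6.0])

def feasible(x, tol=1e-9):
    """True if x satisfies A x <= b and x >= 0 (up to a small tolerance)."""
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol))

x1 = np.array([0.0, 4.0])   # a feasible vertex
x2 = np.array([2.0, 2.0])   # another feasible vertex
assert feasible(x1) and feasible(x2)

# Every convex combination lam*x1 + (1-lam)*x2 must remain feasible.
for lam in np.linspace(0.0, 1.0, 101):
    assert feasible(lam * x1 + (1 - lam) * x2)

print("all sampled convex combinations are feasible")
```

Sampling does not prove the theorem, of course; the algebraic argument above does. The check is only a sanity test that the walls are where the inequalities say they are.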
CAUTION

Institutional Warning.

Students often conflate 'convexity' with 'compactness'. A feasible region can be convex but unbounded (for example, a region constrained only by $\mathbf{x} \geq \mathbf{0}$). Convexity guarantees that no 'local' traps exist, but it does not guarantee that a finite optimal solution exists: without further bounds, the objective may improve indefinitely in some feasible direction.
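A small numerical illustration of the warning (plain NumPy; the single constraint is a made-up example): the region $\{(x_1, x_2) : x_1 - x_2 \leq 1,\ \mathbf{x} \geq \mathbf{0}\}$ is convex yet unbounded, so minimizing $-x_1$ over it has no finite optimum.

```python
import numpy as np

# Convex but unbounded region: x1 - x2 <= 1, x >= 0.
A = np.array([[1.0, -1.0]])
b = np.array([1.0])

def feasible(x, tol=1e-9):
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol))

# Walk along the feasible ray (t + 1, t): feasibility holds for every t >= 0,
# while the objective -x1 decreases without bound.
objectives = []
for t in [0.0, 10.0, 100.0, 1000.0]:
    x = np.array([t + 1.0, t])
    assert feasible(x)
    objectives.append(-x[0])

# Strictly decreasing along the ray: no finite minimum in this direction.
assert all(u > v for u, v in zip(objectives, objectives[1:]))
print("objective values along the ray:", objectives)
```

The region is still perfectly convex; what is missing is a wall that blocks the ray, which is exactly the distinction the warning draws.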

Academic Inquiries.

01

Why is the intersection of half-spaces always convex?

A half-space $\{\mathbf{x} : \mathbf{a}^T \mathbf{x} \leq b\}$ is convex. Since the intersection of any collection of convex sets remains convex, the feasible region, formed by intersecting multiple half-spaces, inherits this property.
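The argument can be written out in two steps. Let $S = \bigcap_i S_i$ with each $S_i$ convex, and take $\mathbf{x}_1, \mathbf{x}_2 \in S$ and $\lambda \in [0,1]$:

```latex
\mathbf{x}_1, \mathbf{x}_2 \in S_i \ \text{for every } i
\;\Longrightarrow\;
\lambda \mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in S_i \ \text{for every } i
\;\Longrightarrow\;
\lambda \mathbf{x}_1 + (1-\lambda)\mathbf{x}_2 \in \bigcap_i S_i = S .
```

Nothing in the argument limits how many sets are intersected, so it covers any finite or infinite family of half-spaces.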

02

Does non-convexity make optimization impossible?

Not impossible, but significantly harder. In non-convex optimization, gradient-based methods can get trapped in local optima, requiring global search heuristics or stochastic methods to locate the true global optimum.
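To see the trap concretely, here is a short sketch (plain Python; the function and step size are made up for illustration) in which basic gradient descent on the non-convex function $f(x) = x^4 - 3x^2 + x$ settles into different minima depending on where it starts:

```python
def f(x):
    return x**4 - 3 * x**2 + x   # non-convex: two separate local minima

def grad(x):
    return 4 * x**3 - 6 * x + 1  # derivative of f

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from a given starting point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # settles near x ≈ -1.30
right = descend(2.0)   # settles near x ≈ 1.13

# Both endpoints are stationary (gradient ≈ 0), yet only one basin
# contains the global minimum: the starting point decided the outcome.
assert abs(grad(left)) < 1e-6 and abs(grad(right)) < 1e-6
assert f(left) < f(right)
print(f"left basin: x = {left:.3f}, right basin: x = {right:.3f}")
```

On a convex feasible region with a linear objective this cannot happen, which is precisely why the convexity result above matters for linear programming.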

Standardized References.

  • Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.

Institutional Citation


NICEFA Visual Mathematics. (2026). Convexity of the Feasible Region in Linear Programming: Visual Proof & Intuition. Retrieved from https://nicefa.org/library/fundamentals-of-optimization/convexity-of-the-feasible-region-in-linear-programming
