
VCVio.CryptoFoundations.Asymptotics.Security

Asymptotic Security Experiments and Games #

This file defines asymptotic security experiments and games where the advantage function is abstract — not tied to any specific game formulation. This allows the same meta-theorems (reductions, game-hopping, hybrid arguments) to work for failure-based games (SecExp), distinguishing games (ProbComp.distAdvantage), and any other advantage metric.

Main Definitions #

Main Results #

structure SecurityExp :
Type

An asymptotic security experiment: an advantage function of the security parameter. Secure means the advantage is negligible.

def SecurityExp.secure (ase : SecurityExp) :
Prop

An asymptotic security experiment is secure if its advantage is negligible.
theorem SecurityExp.secure_of_pointwise_bound (ase : SecurityExp) (f : ℕ → ENNReal) (hf : negligible f) (hbound : ∀ (n : ℕ), ase.advantage n ≤ f n) :
ase.secure

If the advantage at each n is bounded by f n, and f is negligible, then the experiment is secure.
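For intuition, here is a minimal self-contained Lean sketch of the shape of these definitions. The names (SecurityExp', Negligible) and the exact formulation of negligibility are ours, not the library's; VCVio's negligible may be stated differently.

```lean
import Mathlib

-- Hypothetical minimal model, for intuition only: a security experiment
-- is just its advantage function of the security parameter ℕ.
structure SecurityExp' where
  advantage : ℕ → ENNReal

-- One common formulation of negligibility: the function is eventually
-- below every inverse-polynomial bound.
def Negligible (f : ℕ → ENNReal) : Prop :=
  ∀ c : ℕ, ∀ᶠ n in Filter.atTop, f n ≤ 1 / (n : ENNReal) ^ c

-- Security is negligibility of the advantage.
def SecurityExp'.secure (e : SecurityExp') : Prop :=
  Negligible e.advantage
```

With this shape, secure_of_pointwise_bound is exactly downward closure: any advantage pointwise below a negligible f is itself negligible.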

Smart constructors #

noncomputable def SecurityExp.ofSecExp {m : ℕ → Type → Type u_1} [(n : ℕ) → Monad (m n)] (exp : (n : ℕ) → SecExp (m n)) :
SecurityExp

Build from a family of failure-based SecExp.
noncomputable def SecurityExp.ofDistExp (game₀ game₁ : ℕ → ProbComp Unit) :
SecurityExp

Build from a two-game distinguishing experiment. Advantage = |Pr[= () | game₀ n] - Pr[= () | game₁ n]|.
noncomputable def SecurityExp.ofGuessExp (game : ℕ → ProbComp Unit) :
SecurityExp

Build from a single-game guessing experiment. Advantage = |1/2 - Pr[= () | game n]|.
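Since ℝ≥0∞ has only truncated subtraction, the absolute differences in the two constructors above are plausibly realized as a symmetric truncated difference. A hypothetical sketch (absDiff and the two advantage functions are our names, not the library's):

```lean
import Mathlib

-- Hypothetical sketch: |p - q| in ℝ≥0∞, where subtraction is truncated
-- at zero, is the max of the two truncated differences.
def absDiff (p q : ENNReal) : ENNReal := (p - q) ⊔ (q - p)

-- Distinguishing advantage at parameter n, given the success
-- probabilities p₀ n and p₁ n of the two games.
def distAdvantage (p₀ p₁ : ℕ → ENNReal) (n : ℕ) : ENNReal :=
  absDiff (p₀ n) (p₁ n)

-- Guessing advantage: distance of the success probability from 1/2.
def guessAdvantage (p : ℕ → ENNReal) (n : ℕ) : ENNReal :=
  absDiff (1 / 2) (p n)
```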

Asymptotic Security Games #

A security game is parameterized by an adversary type. The advantage function maps each adversary and security parameter to a non-negative extended real.

The predicate isPPT is left abstract; users specialize it to PolyQueries or other efficiency notions as appropriate.

structure SecurityGame (Adv : Type u_1) :
Type u_1

An asymptotic security game: maps each adversary and security parameter to an advantage value. Decoupled from any specific game formulation — the same meta-theorems work for failure-based, distinguishing, and other game styles.
def SecurityGame.secureAgainst {Adv : Type u_1} (g : SecurityGame Adv) (isPPT : Adv → Prop) :
Prop

A game is secure against a class of adversaries (specified by isPPT) if every adversary in that class has negligible advantage.
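A minimal self-contained Lean sketch of the game-level notions, under the same caveat as before: the primed names and the formulation of Negligible are ours, not the library's exact definitions.

```lean
import Mathlib

-- Hypothetical minimal model: a game assigns an advantage to every
-- adversary and security parameter.
structure SecurityGame' (Adv : Type*) where
  advantage : Adv → ℕ → ENNReal

-- Negligibility as eventual domination by every inverse polynomial.
def Negligible (f : ℕ → ENNReal) : Prop :=
  ∀ c : ℕ, ∀ᶠ n in Filter.atTop, f n ≤ 1 / (n : ENNReal) ^ c

-- Secure against a class of adversaries: every adversary in the class
-- (as carved out by isPPT) has negligible advantage.
def SecurityGame'.secureAgainst {Adv : Type*} (g : SecurityGame' Adv)
    (isPPT : Adv → Prop) : Prop :=
  ∀ A, isPPT A → Negligible (g.advantage A)

-- Fixing the adversary recovers an experiment-level advantage function,
-- matching SecurityGame.toSecurityExp below.
def SecurityGame'.toAdvantage {Adv : Type*} (g : SecurityGame' Adv)
    (A : Adv) : ℕ → ENNReal :=
  g.advantage A
```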
def SecurityGame.toSecurityExp {Adv : Type u_1} (g : SecurityGame Adv) (A : Adv) :
SecurityExp

Fixing an adversary in a game produces an asymptotic security experiment.
@[simp]
theorem SecurityGame.toSecurityExp_advantage {Adv : Type u_1} (g : SecurityGame Adv) (A : Adv) :

Smart constructors #

noncomputable def SecurityGame.ofSecExp {Adv : Type u_1} {m : ℕ → Type → Type u_2} [(n : ℕ) → Monad (m n)] (game : Adv → (n : ℕ) → SecExp (m n)) :
SecurityGame Adv

Build from a family of failure-based SecExp.
noncomputable def SecurityGame.ofDistGame {Adv : Type u_1} (game₀ game₁ : Adv → ℕ → ProbComp Unit) :
SecurityGame Adv

Build from a two-game distinguishing experiment. Advantage = |Pr[= () | game₀ A n] - Pr[= () | game₁ A n]|.
noncomputable def SecurityGame.ofGuessGame {Adv : Type u_1} (game : Adv → ℕ → ProbComp Unit) :
SecurityGame Adv

Build from a single-game guessing experiment. Advantage = |1/2 - Pr[= () | game A n]|.

Security reductions #

theorem SecurityGame.secureAgainst_of_reduction {Adv : Type u_1} {Adv' : Type u_2} {g : SecurityGame Adv} {g' : SecurityGame Adv'} {isPPT : Adv → Prop} {isPPT' : Adv' → Prop} {reduce : Adv → Adv'} (hreduce : ∀ (A : Adv), isPPT A → isPPT' (reduce A)) (hbound : ∀ (A : Adv) (n : ℕ), g.advantage A n ≤ g'.advantage (reduce A) n) (hsecure : g'.secureAgainst isPPT') :
g.secureAgainst isPPT

Basic security reduction: if there is a map reduce : Adv → Adv' that preserves efficiency and the advantage of g is pointwise ≤ the advantage of g' on the reduced adversary, then security of g' implies security of g.
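Informally (notation ours), the argument is just downward closure of negligibility: for any PPT adversary A,

```latex
\mathrm{adv}_{g}(A, n) \;\le\; \mathrm{adv}_{g'}(\mathrm{reduce}(A), n)
\quad \text{for all } n,
```

and since reduce(A) lies in the target class and g' is secure, the right-hand side is negligible; any function pointwise below a negligible function is itself negligible.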

Game hopping #

theorem SecurityGame.secureAgainst_of_close {Adv : Type u_1} {g₁ g₂ : SecurityGame Adv} {isPPT : Adv → Prop} {ε : ℕ → ENNReal} (hε : negligible ε) (hclose : ∀ (A : Adv), isPPT A → ∀ (n : ℕ), g₁.advantage A n ≤ g₂.advantage A n + ε n) (hsecure : g₂.secureAgainst isPPT) :
g₁.secureAgainst isPPT

Game-hopping step: if the advantage of g₁ at every security parameter is at most the advantage of g₂ plus some negligible ε, and g₂ is secure, then g₁ is secure.

This is the fundamental lemma for game-hopping proofs: each "hop" from g₁ to g₂ introduces at most ε(n) advantage loss, and ε is absorbed because negligible functions are closed under addition.
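Hops compose. For a chain of games g₁, g₂, g₃ with per-hop losses ε₁, ε₂ (notation ours), applying the bound twice gives

```latex
\mathrm{adv}_{g_1}(A, n)
  \;\le\; \mathrm{adv}_{g_2}(A, n) + \varepsilon_1(n)
  \;\le\; \mathrm{adv}_{g_3}(A, n) + \varepsilon_1(n) + \varepsilon_2(n),
```

and ε₁ + ε₂ is again negligible by closure under addition, so security of g₃ propagates all the way back to g₁.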

theorem SecurityGame.secureAgainst_of_close_reduction {Adv : Type u_1} {Adv' : Type u_2} {g₁ : SecurityGame Adv} {g₂ : SecurityGame Adv'} {isPPT : Adv → Prop} {isPPT' : Adv' → Prop} {reduce : Adv → Adv'} {ε : ℕ → ENNReal} (hε : negligible ε) (hreduce : ∀ (A : Adv), isPPT A → isPPT' (reduce A)) (hclose : ∀ (A : Adv), isPPT A → ∀ (n : ℕ), g₁.advantage A n ≤ g₂.advantage (reduce A) n + ε n) (hsecure : g₂.secureAgainst isPPT') :
g₁.secureAgainst isPPT

Game-hopping step with a reduction: if the advantage of g₁ with adversary A is at most the advantage of g₂ with the reduced adversary plus ε, then security of g₂ (against the target class) implies security of g₁. Combines reduction and game hop.

Polynomial-loss reductions #

theorem SecurityGame.secureAgainst_of_poly_reduction {Adv : Type u_1} {Adv' : Type u_2} {g : SecurityGame Adv} {g' : SecurityGame Adv'} {isPPT : Adv → Prop} {isPPT' : Adv' → Prop} {reduce : Adv → Adv'} {loss : Polynomial ℕ} (hreduce : ∀ (A : Adv), isPPT A → isPPT' (reduce A)) (hbound : ∀ (A : Adv) (n : ℕ), g.advantage A n ≤ (Polynomial.eval n loss) * g'.advantage (reduce A) n) (hsecure : g'.secureAgainst isPPT') :
g.secureAgainst isPPT

Polynomial-loss security reduction: if there is a reduction reduce : Adv → Adv' that preserves efficiency and the advantage of g is at most loss(n) times the advantage of g' on the reduced adversary, then security of g' implies security of g.

This handles reductions where the adversary's advantage is amplified by a polynomial factor, e.g., from a hybrid argument guessing which of poly(n) steps to exploit.
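The polynomial factor is absorbed because a negligible function beats every polynomial. Informally (notation ours): if adv_g' is negligible, then for any target exponent c and any d ≥ deg(loss), it is eventually below n^{-(c + d)}, so

```latex
\mathrm{loss}(n) \cdot \mathrm{adv}_{g'}(\mathrm{reduce}(A), n)
  \;\le\; O(n^{d}) \cdot n^{-(c + d)}
  \;=\; O(n^{-c})
\quad \text{for all sufficiently large } n,
```

which is below every inverse polynomial, hence negligible.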

theorem SecurityGame.secureAgainst_of_close_poly_reduction {Adv : Type u_1} {Adv' : Type u_2} {g₁ : SecurityGame Adv} {g₂ : SecurityGame Adv'} {isPPT : Adv → Prop} {isPPT' : Adv' → Prop} {reduce : Adv → Adv'} {loss : Polynomial ℕ} {ε : ℕ → ENNReal} (hε : negligible ε) (hreduce : ∀ (A : Adv), isPPT A → isPPT' (reduce A)) (hclose : ∀ (A : Adv), isPPT A → ∀ (n : ℕ), g₁.advantage A n ≤ (Polynomial.eval n loss) * g₂.advantage (reduce A) n + ε n) (hsecure : g₂.secureAgainst isPPT') :
g₁.secureAgainst isPPT

Combined game-hopping step with polynomial advantage loss and reduction.

Hybrid argument #

theorem SecurityGame.secureAgainst_of_hybrid {Adv : Type u_1} {games : ℕ → SecurityGame Adv} {isPPT : Adv → Prop} {ε : ℕ → ENNReal} (hε : negligible ε) {k : ℕ} (hconsec : ∀ j < k, ∀ (A : Adv), isPPT A → ∀ (n : ℕ), (games j).advantage A n ≤ (games (j + 1)).advantage A n + ε n) (hsecure : (games k).secureAgainst isPPT) :
(games 0).secureAgainst isPPT

Hybrid argument: given k + 1 games indexed by 0, 1, ..., k, if consecutive games differ by at most ε(n) in advantage and the final game is secure, then the first game is also secure.

The total advantage loss is at most k · ε(n), which is negligible since ε is negligible and k is constant.
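The bound comes from telescoping the k consecutive hops (writing G_j for games j, notation ours):

```latex
\mathrm{adv}_{G_0}(A, n)
  \;\le\; \mathrm{adv}_{G_k}(A, n) + \sum_{j=0}^{k-1} \varepsilon(n)
  \;=\; \mathrm{adv}_{G_k}(A, n) + k \cdot \varepsilon(n),
```

and k · ε is negligible for any fixed k, since negligible functions are closed under addition (equivalently, under multiplication by a constant).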