IACR News
International Association for Cryptologic Research
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available via email and via RSS feed.
23 April 2026
Masking Ordering Failures in BFT SMR via Proactive Pre-Commit Execution
Jianting Zhang, Alberto Sonnino, Lefteris Kokoris-Kogias, Aniket Kate
Modern Byzantine fault-tolerant state machine replication (BFT SMR) systems adopt a decoupled BFT consensus process to separate data dissemination from transaction ordering as it enables efficient (asynchronous) dissemination even when ordering fails intermittently under partial synchrony. Nevertheless, they may still suffer from high transaction confirmation latency as the transaction-execution process waits for the ordering process to complete: when the ordering process stalls, the execution process does not proceed even when transactions are disseminated.
We propose Pufferfish, the first BFT SMR system that effectively masks intermittent ordering failures in practice. Pufferfish introduces a pre-commit execution scheme that enables replicas to speculatively execute transactions even while the ordering process stalls. These pre-commit execution results can be directly committed, if correct, once the ordering failures are resolved. To achieve this, Pufferfish builds an adaptive probabilistic speculation mechanism on top of a DAG-based BFT consensus protocol, enabling replicas to predict and speculatively execute transactions ahead of confirmed ordering. Additionally, Pufferfish adopts a commit-aware snapshot mechanism to minimize the overhead of transaction re-execution in cases of speculation failures. To demonstrate the effectiveness of Pufferfish, we implement and evaluate it in a geo-distributed AWS environment. The evaluation results show that Pufferfish achieves faster recovery and a 1.36x speedup on the p99 transaction confirmation latency compared to the state-of-the-art BFT SMR in the presence of ordering failures. Even under normal execution, Pufferfish can achieve a 1.58x speedup on transaction confirmation latency under a transaction workload of 80k tps.
On the Decoding Failure Rate of HQC
Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi
Cryptography based on error correction codes has gained significant interest due
to its ability to provide security against both classical and quantum adversaries.
In 2025, the U.S. National Institute of Standards and Technology selected
the Hamming Quasi-Cyclic (HQC) key encapsulation mechanism for standardization.
A key aspect of HQC is the possibility of decryption failures, which reveal
information about the private key. To address this issue, the HQC authors developed
a probabilistic model for the decoding failure rate (DFR) of the underlying
error-correcting code, and adjusted the cryptosystem parameters to thwart attacks
based on decryption failures. However, the DFR model relies on the assumption of
independence between coordinates of the error vector, which does not hold in HQC.
This approximation yields conservative DFR estimates in regimes where failure
probabilities can be simulated, and it is hypothesized to remain conservative
for cryptographic-grade parameter sets.
In this work, we eliminate the independence assumptions and derive a new
closed-form DFR model for HQC. We demonstrate that the previous approximation
remains conservative in the cryptographic regime and that HQC's current decoding
failure rates are lower than the required ones. We describe optimization techniques
that enable our probabilistic model to serve as a parameter-tuning tool, and
demonstrate how the size of HQC public keys and ciphertexts can be slightly reduced
without compromising security.
sigma-rs: A Modular Approach for Keyed-Verification Anonymous Credentials
Michele Orru, Lindsey Tulloch, Victor Snyder-Graf, Ian Goldberg
We introduce a new software stack in Rust aimed at simplifying
constructions and deployments of protocols based on
modern anonymous credential systems.
Through its layered design, the stack, called sigma-rs, abstracts
cryptographic complexity while remaining flexible enough to support a
range of credential schemes, proofs, and access policies.
It emphasizes misuse resistance via type safety, domain separation, and
prover-state discipline, and supports side-channel-aware constant-time
strategies.
We evaluate practicality through re-implementations of Tor’s Lox bridge
distribution protocols and of user authentication in the Open
Observatory for Network Interference.
Oriole: Adaptively Secure Partially Non-Interactive Threshold Signatures from Lattices
Kaijie Jiang, Hoeteck Wee, Chenzhi Zhu
ePrint Report
We present the first lattice-based, partially non-interactive threshold signature scheme that tolerates the adaptive corruption of up to $T-1$ signers, where $T$ is the signing threshold. Our construction relies on the MSIS and MLWE assumptions, and has two rounds, of which only the second is message-dependent. We substantially improve upon prior adaptively secure lattice-based schemes (CRYPTO '24 and EUROCRYPT '26), which require at least two message-dependent rounds. Compared to prior lattice-based partially non-interactive schemes (CRYPTO '24, S\&P '25, CRYPTO '25), we achieve better communication complexity in addition to stronger security guarantees.
Equivocal Broadcast Encryption: Adaptively-Secure Optimal Distributed Broadcast Encryption from Lattices
Rishab Goyal, Saikumar Yadugiri
ePrint Report
We present the first Distributed Broadcast Encryption (DBE) scheme from falsifiable lattice assumptions that achieves adaptive security with optimal parameters (short public/secret keys and ciphertexts). Our construction enjoys transparent setup and offers flexible instantiation: we achieve a succinct CRS in the Random Oracle Model, or a long CRS in the standard model. Previously, no lattice-based DBE simultaneously achieved adaptivity and optimal parameters in either setting.
To achieve this, we introduce a new methodology for proving adaptive security: $\textit{Equivocal Encryption Systems}$. This framework operates in two indistinguishable modes: a 'real' mode utilizing standard algorithms, and a 'fake' mode where keys and ciphertexts are jointly sampled with auxiliary trapdoors, enabling the dynamic equivocation of ciphertexts to arbitrary challenge values. While our approach is technically distinct from the celebrated Dual System Encryption (Waters, CRYPTO'09), we believe it could serve as a similarly powerful paradigm for realizing adaptive security across a broad class of lattice-based encryption systems.
Experimental Validation of AUX scheme for Quantum Homomorphic Encryption on IBM Quantum Platforms
Gia Phat Dang, Weisheng Si, Belal Alsinglawi, Jim Basilakis
ePrint Report
Quantum Homomorphic Encryption (QHE) addresses Quantum Cloud Computing (QCC) security concerns by ensuring the privacy of a client’s data and algorithms when outsourced to untrusted third-party quantum servers. However, current QHE schemes face significant challenges: scaling computational resources introduces overhead and hardware noise, degrading accuracy and compromising security. This paper implements and analyses a non-interactive AUX-QHE scheme that employs pre-generated auxiliary states for universal computation. We identify three critical computational bottlenecks: exponential growth in auxiliary state count, complex homomorphic evaluation, and extensive symbolic key updates. Through experimental evaluation on IBM Quantum hardware, we quantify the impact of NISQ noise on AUX-QHE performance and establish practical resource thresholds for deployment. Our results bridge the gap between theoretical QHE frameworks and their practical implementation on noisy quantum devices, providing concrete benchmarks for future noise mitigation efforts.
Towards a Field-Informed Risk-Based Framework for PQC Migration in Legacy Systems
Paul Chammas, Khalil Hariss, Carole Bassil, Maroun Chamoun
ePrint Report
Ongoing advances in quantum computing represent a growing risk to modern cryptography (potentially threatening both asymmetric and symmetric encryption protocols), thereby challenging the foundations of digital security. In response, global cybersecurity communities, led by standardization bodies such as NIST and ETSI, launched initiatives to establish migration pathways toward post-quantum cryptography (PQC).
However, the migration of legacy systems to quantum-safe cryptography presents many as-yet-unaddressed challenges, owing to these systems' limited cryptographic agility, outdated infrastructure, and regulatory constraints. These legacy environments, even though they rely on aging technologies and constrained hardware, are still vital to major sectors (such as finance, energy, healthcare, and government).
This paper explores some obstacles to the implementation of PQC in these environments, such as hard-coded cryptographic functions, outdated programming languages, hardware limitations, vendor lock-in, interoperability constraints, and certification issues. This shows that, in contrast to contemporary systems, legacy systems cannot be readily modified or easily re-engineered.
A critical review of existing standards and academic publications revealed key limitations: their focus on algorithm specifications, the abstract guidance provided without operational depth, the lack of empirical validation, and the insufficient risk modeling and attention to legacy constraints. These gaps prevent effective planning and secure execution of the PQC migration in legacy systems.
Consequently, this position paper argues that existing deliverables remain insufficient to address the specific challenges of PQC migration in legacy systems. It proposes the elaboration of a field-informed, risk-based framework for PQC migration in legacy systems to guide this transition. This proposed framework combines three interdependent layers: a diagnostic characterization of legacy system constraints, a qualitative risk assessment grounded in those constraints, and a quantitative evaluation of migration options through an ROI-based analysis to support decision-making. Unlike existing approaches that treat legacy systems as a generic label, this framework begins by exploring what makes each system legacy in its specific context before applying the risk model. Its development is informed by an empirical survey conducted among large organizations across critical sectors, ensuring relevance beyond theoretical assumptions.
Future work will focus on elaborating the framework through applied research, tool development, and real-world case studies in collaboration with financial institutions and critical infrastructure operators. In addition, continued engagement with cyber authorities and standardization bodies will help us ensure alignment with emerging regulations.
Foundations of Verifiably Encrypted (Blind) Signatures
Diego Castejon-Molina, Erkan Tairi, Dimitrios Vasilopoulos, Pedro Moreno-Sanchez
ePrint Report
Many blockchain-based applications can be seen as instances of fair exchange of two signatures. Adaptor signatures (AS) and, more concretely, their extractability property, are commonly combined with blockchain-based economic incentives to achieve fairness in the exchange of two signatures in the blockchain. Certain blockchain applications require unique signatures (e.g., BLS), but it is formally impossible to build AS from unique signatures. Other applications need blind signatures; however, we find a tension between extractability and blindness. To address these limitations, we observe that fair exchange protocols based on AS only require extractability for one of the two exchanged signatures. This observation allows the other AS to be replaced with a primitive that provides similar security guarantees without inheriting the limitations of AS with respect to unique and blind signatures. A natural candidate is verifiably encrypted signatures (VES), introduced by Boneh et al. (Eurocrypt'03). However, this primitive predates blockchain systems and relies on a trusted party, the adjudicator.
Our first contribution is to eliminate the need for an adjudicator by shifting trust to the blockchain and redefining the VES security model accordingly. We introduce two new security notions and prove that our notions imply existing guarantees. We revisit classical VES constructions by Boneh et al. (Eurocrypt'03) for unique signatures and by Hanser et al. (ESORICS'15) for probabilistic signatures, and show that they satisfy our new definitions. Furthermore, we compare our new notions with AS, and conclude that our revised VES is equivalent in terms of security to AS without extractability. Our second contribution extends VES to support blind and non-interactive blind signatures, introducing a new primitive: Verifiably Encrypted Blind Signatures (VEBS). We present a novel construction for non-interactive blind signatures and prove its security. We implement our construction and demonstrate its practical efficiency: encryption requires 3 ms, verification 6 ms, and decryption 13 ms, with a communication cost of 912 bytes. Finally, we discuss how VES/VEBS apply to diverse use cases, including anonymous credentials, contingent payments, atomic swaps, intermediated payments, coin mixing, and applications involving blind signatures.
Secret-Carrying Puzzles and Garbled Circuits Optimized for Zero-knowledge Proofs
Debasish Ray Chawdhuri, Manoj Prabhakaran
ePrint Report
In this work, we introduce the concept of Obliviously Checkable
Secret-Carrying Puzzles (OxSP) and build proof-friendly Garbled Circuits
(GCs) to enable their practical implementation. OxSPs allow one to publicly pose
puzzles and verify purported solutions received in response, while keeping
the desired parts of the puzzles and the responses hidden.
We show how OxSPs can be based on Garbled Circuits (GCs). However, this
requires ZK-SNARK proofs of correctness of garbling. We note that combining
existing GC and ZK-SNARK constructions results in very large computational
costs for the OxSP solvers. Our main technical contribution is to design a
new proof-friendly GC construction which cuts down the cost of generating a
proof of correct garbling to almost a third, without resorting to
non-standard cryptographic assumptions.
Beyond its use in OxSP, we expect our proof-friendly GCs to be of significant
independent interest, as a tool for auditable secure 2-party computation.
Efficient Construction of Threshold BBS+ Signatures and its Extensions
Yang Heng, Mengling Liu, Xingye Lu, Haiyang Xue, Zijian Bao, Man Ho Au
ePrint Report
BBS+ signatures are widely adopted in privacy-preserving systems such as anonymous credentials and Direct Anonymous Attestation (DAA). To strengthen key security and eliminate single points of failure, threshold variants of BBS+ signatures have become increasingly important. However, existing constructions suffer from notable inefficiencies: some entail excessive communication overhead (e.g., DKL+23, S&P 2023), while others impose substantial computational costs and require additional interaction rounds (e.g., WMC24, NDSS 2024).
In this work, we present a novel and efficient three-round threshold BBS+ signature scheme from the Castagnos–Laguillaumie (CL) cryptosystem. Our construction achieves better communication–computation trade-offs than previous works. Specifically, compared to the four-round WMC24 scheme, our protocol reduces communication by $77.4\%$ and demonstrates faster computation, with benchmarks indicating speedups of $10.6$--$16.6\times$ in single-threading and $3.3$--$5.4\times$ in multi-threading. Against the three-round protocol DKL+23, our scheme exhibits an asymptotic slowdown factor of $4\times$, but reduces communication by two orders of magnitude.
We further extend our techniques to threshold BBS signatures, Dodis-Yampolskiy verifiable random functions (DY VRFs), and multiplication protocols (DNP25 and LLZ+25, CCS'25). This yields: (1) a three-round threshold protocol for the original BBS scheme; (2) two-round threshold protocols for both DY VRFs (focusing on its oblivious variant) and the AGM-secure BBS variant; and (3) one fewer group element in broadcasts for the multiplication protocol with reduced ZKP costs via simplified relations.
Integral Resistance and Degree Bounds for Complex Linear Layers: Application to PRINCE and Lower-Latency Alternatives
Simon Gerhalter, Maria Eichlseder
ePrint Report
The integral-resistance property provides strong arguments against integral distinguishers. Recently, Zeng and Tian proposed a new method to show this property for AES. In this paper, we provide a generalized framework and tool called intres to extend and apply this method to other ciphers with complex linear layers. We derive properties that a cipher must fulfill in order for the method to be applicable. Furthermore, we introduce a degree propagation model which helps us determine the valid key masks for the integral-resistance matrix. The degree model can also be used to upper-bound the algebraic degree of cipher constructions. This allows us to provide tighter upper bounds for the degree of Rijndael-256. We propose algorithmic improvements to substantially decrease the runtime of the offline phase with the intres framework. As a result, we are able to show the integral-resistance property for 7 rounds of PRINCE and 6 rounds of Beanie. Finally, we develop a heuristic MILP-based approach to search for lower-latency alternatives to the MixColumns matrices of PRINCE while maintaining integral resistance. After showing that using this new matrix we still achieve 7-round integral resistance, we validate our method with SAT-based trail counting. While using a MixColumns matrix only optimized for integral resistance might affect security against other types of attacks, we believe these lower-latency matrices have their place in constructions similar to ZIP-ciphers, where integral resistance is particularly critical.
Neural Leakage–based Cryptanalysis of LowMC with Linear Complexity
Kwangjo Kim
ePrint Report
MPC-in-the-Head protocols enable post-quantum digital signatures based solely on symmetric primitives, with PICNIC being a prominent example built on the LowMC block cipher. While existing analyses assume exact Boolean circuit semantics, recent advances in neural representations suggest that piecewise-linear implementations may introduce activation boundary leakage. In this work, we investigate whether such leakage can be exploited in the context of LowMC and MPC-in-the-Head transcripts. We propose a perturbation-based probing methodology that models neural leakage and reduces round-key recovery to independent binary hypothesis tests via majority voting. Exploiting the linear structure of the LowMC key schedule, we demonstrate that recovery of the first-round key enables efficient reconstruction of the master key with linear complexity. Experimental results confirm successful recovery of 128-, 192-, and 256-bit keys under the proposed model, highlighting a new dimension in symmetric cryptanalysis and the need to consider learning-based leakage in future designs.
Secure and Updatable Single Password Authentication
Devriş İŞLER, HamidReza Saadi Dadmarzi, Alptekin Küpçü
ePrint Report
Passwords remain the default mechanism for user authentication despite well-known weaknesses such as offline dictionary attacks and pervasive password reuse. Single Password Authentication (SPA) solutions mitigate these risks by securely protecting high-entropy authentication secrets under a single human-memorable password and storing them across untrusted storage provider(s). However, existing SPA schemes leave important practical gaps: storage providers cannot verify password knowledge, enabling preemption and overwrite attacks on stored shares, and current designs do not securely and efficiently support secret or password updates.
We present $\mathsf{UpSPA}$, an efficient, secure, and updatable threshold SPA that closes these gaps without necessitating login-server-side changes.
$\mathsf{UpSPA}$ introduces a high-entropy, storage-provider-specific identifier secret to prevent preemption, enables secret updates via implicit authentication, and supports password updates via explicit authentication using a password-protected signing key.
We prove security in the ideal-real paradigm, including resistance to offline dictionary attacks under standard static threshold corruption assumptions. Our evaluation shows that $\mathsf{UpSPA}$ incurs low overhead on commodity hardware and remains competitive across threshold settings relative to prior SPA work that does not support password updates.
Batch-Puncturing Circuit CP-ABE (and More) from Lattices
Yongkang Lang, Fangguo Zhang, Jianghong Wei, Xinyi Huang, Xiaofeng Chen
ePrint Report
Puncturable attribute-based encryption ($\mathsf{PABE}$) not only supports fine-grained access control over encrypted data, but also enables users to revoke the decryption capability for specific messages by puncturing tags, thereby achieving fine-grained forward security. It finds wide applications in scenarios such as sharing government classified documents and personal health records. However, existing $\mathsf{PABE}$ schemes only support tag-by-tag puncturing, where each puncturing operation is done through key delegation, which causes the key size to grow with the number of punctured tags. This inefficiency makes $\mathsf{PABE}$ impractical in scenarios that require frequent puncturing or mass revocations. To address this limitation, it is crucial to support batch puncturing of tags, i.e., the decryption capability for messages associated with multiple tags can be revoked simultaneously via a single puncture.
In this work, we construct a ciphertext-policy attribute-based encryption ($\mathsf{CPABE}$) scheme for circuits with batch-puncturing. Notably, the size of the punctured key in our scheme is independent of the number of punctured tags, as well as the size and depth of the circuits. This is achieved by leveraging the evasive learning with errors ($\mathsf{LWE}$) and tensor $\mathsf{LWE}$ assumptions. In addition, we observe that puncturable $\mathsf{CPABE}$ can be restated as dual-policy $\mathsf{ABE}$ ($\mathsf{DPABE}$) with key delegation, and generalize batch-puncturing $\mathsf{CPABE}$ to provide the first lattice-based construction of $\mathsf{DPABE}$ for circuits. Moreover, inspired by the observation of Agrawal and Yamada (Eurocrypt '20), we introduce the puncturing property into optimal broadcast encryption ($\mathsf{BE}$), capturing a new primitive called puncturable $\mathsf{BE}$, which allows the receiver to securely erase sensitive messages without communicating with the authority.
Failure of proximity gaps close to capacity
Dmitry Krachun, Stepan Kazanin, Ulrich Haböck
ePrint Report
We give a simple counterexample which shows that, for Reed--Solomon codes over multiplicative subgroups of prime fields, proximity gaps do not hold near capacity, at least not as conjectured by Ben-Sasson et al. in BCIKS20.
For relative distance $\theta = 1-\rho-\eta$, where $\rho$ is the rate of the code, and positive $\eta = \Theta_\rho(1/\log n)$, where $n$ is the length of the code, we construct an affine line that is not entirely $\theta$-close to the code but still contains $2^{\Omega_\rho(1/\eta)}$ such points. The same construction gives a slightly stronger list-decoding lower bound. The proof uses a new additive-combinatorics lemma on sums of roots of unity.
Panther: Robust Hybrid KEM Combiners via Structural Splicing
Basker Palaniswamy, Paolo Palmieri, Ashok Kumar Das, Chun-I Fan
ePrint Report
We present Panther, a family of six robust hybrid key encapsulation mechanism (KEM) combiners that pair FrodoKEM (unstructured LWE) with ML-KEM (module-LWE, FIPS 203) so that IND-CCA2 security holds whenever either assumption is hard. The family includes five hardened variants of the textbook combiners (parallel HKDF, SHAKE256 split-key, sequential chaining, XOR, and nested), each made to satisfy a uniform robustness predicate (transcript binding, domain separation, implicit rejection, length normalisation, ∨-security), together with a novel structural-splicing construction Panther-SS that interleaves the constituent ciphertexts and binds the cut positions via a structural tag. Every combiner admits a systematic Market-Theoretic Security Framework proof in which each bidding round is documented by its purpose, the scheme component it replaces, and its complexity cost; the framework extends cleanly to correctness, unbounded session security, QROM security, and quantitative side-channel resistance.
We complement the theory with extensive benchmarks on liboqs-backed reference implementations, including a head-to-head comparison of Panther combiners against the key encapsulation candidates that appeared in NIST PQC Rounds 1–4 (Kyber/ML-KEM, FrodoKEM, NTRU, SABER, NTRU Prime, Classic McEliece, BIKE, HQC). The experiments cover keygen/encaps/decaps latency, throughput, memory footprint, ciphertext and key sizes, scaling with query count, CPU-cycle counts, security-vs-performance Pareto analysis, and an attack-vs-defence matrix against published side-channel attacks on both constituents. The results confirm that hybrid robustness is essentially free over the slower constituent, that Panther-SS uniquely achieves full robustness with combiner-only overhead below half a percent of total latency, and that the Panther family sits on the Pareto frontier of post-quantum KEM candidates.
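As background on the combiner style described above, here is a minimal, hypothetical sketch of a transcript-binding parallel KDF combiner in Python. The function names, the label, and the choice of HKDF-SHA256 are assumptions for illustration; this is not the Panther construction itself, only the generic pattern of binding both ciphertexts and both shared secrets into one derived key.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC(salt, ikm)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand (RFC 5869): iterate HMAC blocks until enough output
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def combine(ss1: bytes, ct1: bytes, ss2: bytes, ct2: bytes,
            label: bytes = b"hybrid-kem-demo-v1") -> bytes:
    # Transcript binding: hash both ciphertexts into the info string, so the
    # derived key commits to the full exchange. Domain separation comes from
    # the fixed label used as the HKDF salt. If either shared secret is
    # pseudorandom, the output key is pseudorandom (OR-robustness).
    transcript = hashlib.sha256(ct1).digest() + hashlib.sha256(ct2).digest()
    prk = hkdf_extract(label, ss1 + ss2)
    return hkdf_expand(prk, transcript)
```

Any change to either shared secret or either ciphertext yields an unrelated-looking session key, which is the behaviour the robustness predicate above formalizes.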
Montgomery Multiplication in Signed Redundant Representations
Thomas Pornin
ePrint Report
In this paper, we explore the use of Montgomery multiplication with a multi-limb redundant representation of integers, in particular in combination with signed reduction factors. We develop techniques that are particularly suited to software platforms on which carry propagation is expensive, in particular RISC-V CPUs which lack hardware support for carries. We also show how to perform a whole-primitive range analysis that demonstrates that overflows are not possible, thus allowing liberal use of unreduced limb-wise additions and subtractions, which are small and fast. The implementation and analysis techniques are illustrated in a code-golfing exercise, to produce size-optimized implementations of ECDSA signature verification over NIST curve P-256; use of a virtual CPU with a custom instruction set with byte-size encoding ("bytecode") allows the production of an implementation as small as 848 bytes on x86 CPUs (in 64-bit mode); RISC-V (984 bytes), Armv8-A (1136 bytes) and portable C implementations (about 2200 to 2800 bytes) are also provided. In the process, an AI is utterly discomfited.
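The paper's signed redundant-limb techniques build on classic Montgomery multiplication. As a point of reference, here is a minimal sketch of the textbook (non-redundant) algorithm in Python, using big integers rather than limbs; the function name and parameters are illustrative and not taken from the paper.

```python
def montgomery_mul(a, b, n, n_bits):
    """Return a*b*R^{-1} mod n, where R = 2^n_bits, n is odd, and a, b < n."""
    R = 1 << n_bits
    # n' = -n^{-1} mod R, normally precomputed once per modulus
    n_prime = (-pow(n, -1, R)) % R
    t = a * b
    m = (t * n_prime) & (R - 1)    # m = t * n' mod R
    u = (t + m * n) >> n_bits      # t + m*n is divisible by R by construction
    return u - n if u >= n else u  # one conditional subtraction, since u < 2n

# Operands are mapped into the Montgomery domain (x -> x*R mod n),
# multiplied there, and mapped back by a final multiplication by 1.
p256 = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
```

Redundant signed-limb variants, as explored in the paper, replace the big-integer arithmetic above with limb-wise operations whose value ranges must be bounded in advance to rule out overflow.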
And TLS lived happily ever after
Michael Scott, Gora Adj, Francisco Rodríguez-Henríquez
ePrint Report
The plausible threat of a Cryptographically Relevant Quantum Computer (CRQC) has rightly stimulated a move away from traditional methods of asymmetric cryptography to new post-quantum secure equivalents. The digital signature is the cryptographic primitive that authenticates an internet server’s identity by signing each certificate in an X.509 certificate chain. A suggested response to the CRQC threat is to deploy a hybrid classical/post-quantum digital signature, combining a traditional tried-and-tested scheme with a post-quantum alternative, where certificates are signed using both methods. Here we propose a fused signature scheme that adopts the same approach but introduces minimal friction into existing TLS architectures.
Cobra: All-in-one for full-fledged defense — a hybrid nested KEM
Basker Palaniswamy, Paolo Palmieri, Ashok Kumar Das, Chun-I Fan
ePrint Report
The transition to post-quantum cryptography (PQC) is constrained by the limited cryptanalytic history of individual PQC algorithms. Hybrid constructions, which combine several primitives so that breaking the hybrid requires breaking each component, address this concern directly. This paper presents Cobra, a hybrid Key Encapsulation Mechanism (KEM) that integrates FrodoKEM (unstructured LWE), ML-KEM (FIPS 203 module-LWE), HQC (code-based), and a Dummy KEM for agility, and analyses all 15 mathematically distinct composition methods spanning parallel, cascading, multi-stage, and nested topologies.
We prove that every Cobra method achieves IND-CCA2 security within the Market-Theoretic Security Framework (MTSF), which subsumes and strictly extends both Universal Composability and the Random Oracle Model. An explicit 10-round bidding-round chain per method yields post-quantum ask prices of approximately $2^{-127}$ at NIST Level 1, together with composability under arbitrary TLS 1.3 embeddings, per-session CNF auditing, and unbounded-session security via pinging. Although all fifteen methods are security-equivalent, encapsulation latency varies by 3.2× (1.2–3.8 ms) and Theorem 7.1 reduces deployment selection to a Pareto-optimal set of five archetypes. Three real-world TLS 1.3 case studies (financial, healthcare, government) confirm the prediction, with infrastructure overhead clustering at 15–22% across sectors.
Summer School on the Theory and Practice of Blockchain Consensus (Columbia University)
New York, USA, 26 May - 29 May 2026
Event Calendar
Event date: 26 May to 29 May 2026
Submission deadline: 25 April 2026
Notification: 30 April 2026