MC Simulator [v0.2.9 Public] [APK]
Forward-secure public-key encryption (FS-PKE) is a key-evolving public-key paradigm that preserves the confidentiality of past encryptions in case of key exposure. Updatable public-key encryption (UPKE) is a natural relaxation of FS-PKE, introduced by Jost et al. (Eurocrypt'19) and motivated by applications to secure messaging. In UPKE, key updates can be triggered, via special update ciphertexts, by any sender willing to enforce the forward secrecy of its encrypted messages. So far, the only truly efficient UPKE candidates (which rely on the random-oracle idealization) provide rather weak security guarantees: being malleable, they only withstand passive adversaries. They also offer no protection against malicious senders willing to hinder the decryption capability of honest users. A recent work of Dodis et al. (TCC'21) described UPKE systems in the standard model that also hedge against maliciously generated update messages in the chosen-ciphertext setting (where adversaries are equipped with a decryption oracle). While important feasibility results, their constructions lag behind the random-oracle candidates in terms of efficiency. In this paper, we first provide a drastically more efficient UPKE realization in the standard model under Paillier's Decisional Composite Residuosity (DCR) assumption. In the random oracle model, we then extend our initial scheme so as to achieve chosen-ciphertext security, even in a model that accounts for maliciously generated update ciphertexts. Under the DCR and Strong RSA assumptions, we thus obtain the first practical UPKE systems that satisfy the strongest security notions put forth by Dodis et al.
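As a rough, non-secure illustration of the UPKE interface sketched above (key generation, encryption, decryption, and sender-triggered key updates via update ciphertexts), the following Python toy uses hashed ElGamal in a tiny subgroup. It is not the paper's DCR-based construction and omits all security machinery; the parameter choices and helper names are illustrative only.

```python
# Toy sketch of the UPKE interface (assumption: NOT the paper's scheme).  A
# sender picks a random exponent delta, ships it inside an "update ciphertext",
# and shifts the public key; the receiver recovers delta, shifts its secret key,
# and erases the old one, which is the mechanism behind forward secrecy here.
import hashlib, secrets

p, q, g = 23, 11, 4            # toy parameters: order-q subgroup of Z_23*

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)     # (sk, pk)

def encrypt(pk, m):            # m is a toy message (a subgroup element)
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), -1, p)) % p

def _pad(elem):                # hash-derived one-time pad for the exponent delta
    return int.from_bytes(hashlib.sha256(str(elem).encode()).digest(), "big") % q

def update_public(pk):         # run by any sender
    delta = secrets.randbelow(q - 1) + 1
    r = secrets.randbelow(q - 1) + 1
    up_ct = (pow(g, r, p), (delta + _pad(pow(pk, r, p))) % q)
    return (pk * pow(g, delta, p)) % p, up_ct

def update_secret(sk, up_ct):  # run by the receiver, who then erases the old sk
    c1, masked = up_ct
    delta = (masked - _pad(pow(c1, sk, p))) % q
    return (sk + delta) % q

sk, pk = keygen()
pk2, up = update_public(pk)    # sender rotates the key pair
sk2 = update_secret(sk, up)    # receiver catches up
m = 9
assert decrypt(sk2, encrypt(pk2, m)) == m
```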
Parallel computation is an important aspect of multi-party computation, not only in terms of improving efficiency, but also in terms of providing privacy for computation involving conditional branching based on private data. While applying multi-party computation in parallel over several sets of input data is straightforward if the partitioning of the input data into sets is publicly known, the problem becomes much more challenging when this partitioning is private. This setting is relevant to a broad class of secure computations, in particular to secure graph and database analysis in which the underlying data (graph or database) is private. In this paper, we consider a general class of functions that can be expressed via the iterative evaluation of a binary associative operation, and propose efficient protocols for evaluating such functions in parallel over privately partitioned input data. Our protocols are optimal in terms of the required number of evaluations of the underlying binary operation (i.e., N-1 evaluations for total input size N), while simultaneously achieving a round complexity that is only logarithmic in the total size of the input data (i.e., O(log N)).
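For intuition only, the plaintext Python sketch below shows one standard way a private partition can be encoded as per-element segment flags and folded with a lifted associative operator in O(log N) rounds. The actual protocols run this kind of computation under MPC on secret-shared data and additionally achieve the N-1 evaluation bound, which this naive scan does not.

```python
# Plaintext sketch (everything in the clear, unlike the MPC protocols above).
# A private partition is encoded as a per-element "segment start" flag, and the
# binary associative operation `op` is lifted to (flag, value) pairs; the lifted
# operator is still associative, so a log-depth scan evaluates every segment.
# This Hillis-Steele-style scan is round-efficient but not work-optimal.
from operator import add

def lift(op):
    def lifted(a, b):
        f1, v1 = a
        f2, v2 = b
        # restart the accumulation whenever the right operand starts a segment
        return (f1 | f2, v2 if f2 else op(v1, v2))
    return lifted

def segmented_scan(op, flags, values):
    items = list(zip(flags, values))
    lifted, n, d = lift(op), len(values), 1
    while d < n:                       # O(log N) rounds of pairwise applications
        items = [items[i] if i < d else lifted(items[i - d], items[i])
                 for i in range(n)]
        d *= 2
    return [v for _, v in items]

# three "private" segments: [1,2,3] | [4,5] | [6,7,8,9]
flags  = [1, 0, 0, 1, 0, 1, 0, 0, 0]
values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(segmented_scan(add, flags, values))
# -> [1, 3, 6, 4, 9, 6, 13, 21, 30]  (running sums restart at every flag)
```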
Zero-knowledge protocols have become increasingly popular and practical in recent years due to their applicability in many areas such as blockchain systems. Unfortunately, public verifiability and small proof sizes of zero-knowledge protocols currently come at the price of strong assumptions, large prover time, or both, when considering statements with millions of gates. In this regime, the most prover-efficient protocols are in the designated-verifier setting, where proofs are only valid to a single party that must keep a secret state.
In this work, we bridge this gap between designated-verifier proofs and public verifiability by distributing the verifier efficiently. Here, a set of verifiers can verify a proof and, if a given threshold t of the n verifiers is honest and trusted, act as guarantors for the validity of a statement. We achieve this while keeping the concrete efficiency of current designated-verifier proofs, and present constructions that have small concrete computation and communication cost. We present practical protocols in the setting of threshold verifiers with t
Modern JavaScript engines that power websites and even full applications on the Web are driven by the need for an increasingly fast and snappy user experience. These engines use several complex and potentially error-prone mechanisms to optimize their performance. Unsurprisingly, the inevitable complexity results in a huge attack surface and various types of software vulnerabilities. On the defender's side, fuzz testing has proven to be an invaluable tool for uncovering different kinds of memory safety violations. Although it is difficult to test interpreters and JIT compilers in an automated way, recent proposals for input generation based on grammars or target-specific intermediate representations have helped uncover many software faults. However, subtle logic bugs and miscomputations that arise from optimization passes in JIT engines continue to elude state-of-the-art testing methods. While such flaws might seem unremarkable at first glance, they are often still exploitable in practice. In this paper, we propose a novel technique for effectively uncovering this class of subtle bugs during fuzzing. The key idea is to take advantage of the tight coupling between a JavaScript engine's interpreter and its corresponding JIT compiler as a domain-specific and generic bug oracle, which in turn yields a highly sensitive fault detection mechanism. We have designed and implemented a prototype of the proposed approach in a tool called JIT-Picker. In an empirical evaluation, we show that our method enables us to detect subtle software faults that prior work missed. In total, we uncovered 32 bugs that were not publicly known and received a $10,000 bug bounty from Mozilla as a reward for our contributions to JIT engine security.
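A heavily simplified, external version of such an interpreter-versus-JIT oracle might look like the Python harness below. The shell path, flag names, and test-case generator are placeholders (assumptions), and JIT-Picker itself compares intermediate values probed inside the engine rather than process output.

```python
# Hedged sketch of a differential "interpreter vs. JIT" bug oracle in the spirit
# described above.  JS_SHELL and the flags are placeholders, not real options of
# any specific engine; the point is only the comparison structure.
import subprocess

JS_SHELL    = "./js"                   # path to a JS engine shell (assumption)
INTERP_ONLY = ["--no-jit"]             # placeholder flag: run without the optimizing JIT
JIT_FORCED  = ["--jit-warmup=1"]       # placeholder flag: trigger JIT compilation eagerly

def run(flags, testcase):
    res = subprocess.run([JS_SHELL, *flags, testcase],
                         capture_output=True, text=True, timeout=10)
    return res.returncode, res.stdout

def differential_oracle(testcase):
    """Return True if the two execution modes disagree on the same input."""
    interp = run(INTERP_ONLY, testcase)
    jitted = run(JIT_FORCED, testcase)
    return interp != jitted            # any divergence is a potential miscomputation

# for t in generate_testcases():       # fuzzer loop (generator not shown)
#     if differential_oracle(t):
#         save_for_triage(t)
```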
We present a set of attacks on the OpenPGP specification and implementations of it which result in full recovery of users' private keys. The attacks exploit the lack of cryptographic binding between the different fields inside an encrypted private key packet, which include the key algorithm identifier, the cleartext public parameters, and the encrypted private parameters. This allows an attacker who can overwrite certain fields in OpenPGP key packets to perform cross-algorithm attacks, causing a user's software to, for example, misinterpret an ECC private key as being a DSA key. It also allows an attacker to replace the legitimate public parameters with adversarially chosen ones, e.g. allowing them to select the DSA group. We refer to this class of attacks as Key Overwriting (KO) attacks. We provide a detailed analysis of the vulnerability of different OpenPGP libraries to KO attacks, showing in particular that, in some cases, the additional key validation steps performed by libraries, which should prevent the attacks, in fact enable variant attacks. We also assess the applicability of KO attacks in the context of specific OpenPGP-based applications that reflect different threat models. Finally, we explain how KO attacks can be completely prevented (and the need for key validation obsoleted) at the OpenPGP specification level by expanding the existing proposal of using AEAD schemes for key packet protection to include all the security-relevant public fields as Associated Data.
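The Python sketch below illustrates the proposed AEAD-based mitigation using AES-GCM from the `cryptography` package: the secret parameters are encrypted while the algorithm identifier and public parameters are bound as associated data, so a KO-style overwrite is detected at decryption time. The packet layout and field encoding are illustrative, not the actual OpenPGP wire format.

```python
# Minimal sketch of the mitigation: AEAD-protect the secret key material and
# bind the security-relevant public fields as associated data.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

def seal_key_packet(kek, algo_id, public_params, secret_params):
    nonce = os.urandom(12)
    ad = algo_id + b"|" + public_params          # fields an attacker might overwrite
    ct = AESGCM(kek).encrypt(nonce, secret_params, ad)
    return {"algo": algo_id, "pub": public_params, "nonce": nonce, "ct": ct}

def open_key_packet(kek, pkt):
    ad = pkt["algo"] + b"|" + pkt["pub"]
    # Raises InvalidTag if the algorithm ID or public parameters were modified.
    return AESGCM(kek).decrypt(pkt["nonce"], pkt["ct"], ad)

kek = AESGCM.generate_key(bit_length=256)        # derived from the passphrase in practice
pkt = seal_key_packet(kek, b"ECC-Ed25519", b"<public point>", b"<private scalar>")
pkt["algo"] = b"DSA"                             # simulated KO attack on the algorithm field
try:
    open_key_packet(kek, pkt)
except Exception:
    print("tampering detected: key packet rejected")
```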
Real-Time Operating Systems (RTOSes) have become the dominant category of embedded systems. They are widely used to support tasks requiring real-time responses, such as in printers and switches. The security of RTOSes has long been overlooked, as they used to run in special environments isolated from attackers. However, with the rapid development of IoT devices, a tremendous number of RTOS devices are now connected to the public network. Due to the lack of security mechanisms, these devices are extremely vulnerable to a wide spectrum of attacks. Even worse, the monolithic design of an RTOS combines various tasks and services into a single binary, which hinders current program testing and analysis techniques from working on RTOSes. In this paper, we propose SFuzz, a novel slice-based fuzzer, to detect security vulnerabilities in RTOSes. Our insight is that an RTOS usually divides a complicated binary into many separate but single-minded tasks. Each task handles a particular event in a deterministic way, and its control flow is usually straightforward and independent. Therefore, we identify such code in the monolithic RTOS binary and synthesize a slice for effective testing. Specifically, SFuzz first identifies functions that handle user input, constructs call graphs that start from the callers of these functions, and leverages forward slicing to build an execution tree from the call graphs while pruning paths independent of external inputs. It then detects and handles roadblocks within this coarse-grained scope that hinder effective fuzzing, such as instructions unrelated to the user input, and conducts coverage-guided fuzzing on these code snippets. Finally, SFuzz leverages forward and backward slicing to track and verify each path constraint and to determine whether a bug discovered by the fuzzer is a real vulnerability. SFuzz successfully discovered 77 zero-day bugs on 35 RTOS samples, and 67 of them have been assigned CVE or CNVD IDs. Our empirical evaluation shows that SFuzz outperforms state-of-the-art tools (e.g., UnicornAFL) at testing RTOSes.
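As a toy illustration of the slicing step, the sketch below finds callers of input-handling functions in a small, hypothetical call graph and forward-slices from them, implicitly pruning tasks that never touch external input. SFuzz itself performs this on binary-level call graphs and control flow; the data structures and function names here are illustrative.

```python
# Simplified, source-level sketch of "identify input handlers, slice forward,
# prune input-independent tasks"; not SFuzz's binary-level implementation.
from collections import deque

# call_graph: caller -> callees; input_sinks: functions that read user input
call_graph = {
    "task_http":     ["parse_request", "log"],
    "parse_request": ["recv", "handle_cmd"],
    "handle_cmd":    ["memcpy"],
    "task_led":      ["blink"],
}
input_sinks = {"recv"}

def callers_of(targets):
    return {f for f, callees in call_graph.items() if set(callees) & targets}

def forward_slice(entry):
    """Collect every function reachable from `entry`: the candidate fuzzing slice."""
    seen, work = set(), deque([entry])
    while work:
        f = work.popleft()
        if f in seen:
            continue
        seen.add(f)
        work.extend(call_graph.get(f, []))
    return seen

# Entry points are the callers of input-handling functions; tasks that never
# touch external input (e.g. task_led) are pruned because they never appear.
slices = {entry: forward_slice(entry) for entry in callers_of(input_sinks)}
print(slices)   # e.g. {'parse_request': {'parse_request', 'recv', 'handle_cmd', 'memcpy'}}
```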
In this paper we present MetaEmu, an architecture-agnostic framework geared towards rehosting and security analysis of automotive firmware. MetaEmu improves over existing rehosting environments in two ways. Firstly, it solves the hitherto open problem of the lack of generic Virtual Execution Environments (VXEs) by synthesizing processor simulators from Ghidra's language definitions. Secondly, MetaEmu can rehost and analyze multiple targets of different architectures simultaneously and share analysis facts between each target's analysis environment, a technique we call inter-device analysis.
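A hypothetical sketch of the inter-device analysis idea: per-target analysis environments, possibly for different architectures, publish facts into a shared store and can wait on facts produced by another target's analysis. The class and function names below are illustrative only, not MetaEmu's actual API.

```python
# Illustrative-only sketch of sharing analysis facts between two rehosted
# targets; the "gateway" and "ECU" analyses and the fact names are made up.
import threading

class SharedFacts:
    """Thread-safe fact store shared by all rehosted targets (hypothetical)."""
    def __init__(self):
        self._facts = {}
        self._cv = threading.Condition()

    def publish(self, key, value):
        with self._cv:
            self._facts[key] = value
            self._cv.notify_all()

    def wait_for(self, key, timeout=5.0):
        with self._cv:
            self._cv.wait_for(lambda: key in self._facts, timeout)
            return self._facts.get(key)

def analyze_gateway(facts):            # e.g. an ARM target's analysis environment
    facts.publish("can_id_seen", 0x7E0)    # a fact derived during its analysis

def analyze_ecu(facts):                # e.g. a target of a different architecture
    can_id = facts.wait_for("can_id_seen")
    print(f"ECU analysis specializes its harness for CAN ID {can_id:#x}")

facts = SharedFacts()
t1 = threading.Thread(target=analyze_gateway, args=(facts,))
t2 = threading.Thread(target=analyze_ecu, args=(facts,))
t2.start(); t1.start()
t1.join(); t2.join()
```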
Sharding is an emerging technique to overcome scalability issues in blockchain-based public ledgers. Without sharding, every node in the network has to listen to and process all ledger protocol messages. The basic idea of sharding is to parallelize the ledger protocol: the nodes are divided into smaller subsets that each take care of a fraction of the original load by executing lighter instances of the ledger protocol, also called shards. The smaller the shards, the higher the efficiency, as increased parallelism reduces the overhead of consensus within each shard.
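A back-of-the-envelope Python sketch of the basic idea: nodes and transactions are deterministically mapped to one of k shards, so each node only processes roughly a 1/k fraction of the load. The hash-based assignment and the names are illustrative, not a particular sharding protocol.

```python
# Toy illustration of shard assignment and transaction routing.
import hashlib

def shard_of(item, num_shards):
    """Deterministically map a node id or transaction id to a shard index."""
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

NUM_SHARDS = 4
nodes = [f"node-{i}" for i in range(20)]
txs   = [f"tx-{i}" for i in range(10)]

# Each shard's committee runs a lighter instance of the ledger protocol.
committees = {s: [n for n in nodes if shard_of(n, NUM_SHARDS) == s]
              for s in range(NUM_SHARDS)}

for tx in txs:
    s = shard_of(tx, NUM_SHARDS)
    # Only the committee of shard `s` listens to and processes this transaction.
    print(tx, "-> shard", s, "handled by", committees[s])
```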