
Browsing by Author "Golab, Wojciech"

Now showing 1 - 16 of 16
  • An Implementation of Fake News Prevention by Blockchain and Entropy-based Incentive Mechanism
    (Springer, 2022-08-18) Chen, Chien-Chih; Du, Yuxuan; Peter, Richards; Golab, Wojciech
    Fake news is undoubtedly a significant threat to democratic countries today, because existing technologies, driven by rapid advances in artificial intelligence and deep learning, can quickly and massively produce fake videos, articles, and social media messages. Human assistance is therefore critical if current fake news prevention systems are to improve their accuracy. Given this situation, prior research has proposed adding a quorum, a group of appraisers trusted by users to verify the authenticity of digital content, to fake news prevention systems. This paper proposes an entropy-based incentive mechanism to diminish the negative effect of malicious behaviors on a quorum-based fake news prevention system. To maintain the safety and liveness of our system, we employ entropy to measure the degree of voting disagreement and determine appropriate rewards and penalties. Moreover, we use Hyperledger Fabric, Schnorr signatures, and human appraisers to implement a practical prototype of a quorum-based fake news prevention system. We then conduct case analyses and experiments to understand how dishonest participants, crash failures, and scale impact our system. The outcomes show that our mechanisms are feasible and provide an analytical basis for developing fake news prevention systems. Furthermore, this extension adds six innovative contributions compared to our previous workshop paper at DEVIANCE 2021.
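    The core of the entropy-based mechanism, measuring the degree of voting disagreement to scale rewards and penalties, can be sketched as follows (a minimal illustration; the function names and the linear reward scaling are assumptions, not the paper's actual implementation):

```python
import math

def vote_entropy(votes):
    """Shannon entropy (bits) of a list of appraiser votes.

    High entropy means strong disagreement among the quorum;
    zero entropy means full consensus.
    """
    n = len(votes)
    probs = [votes.count(v) / n for v in set(votes)]
    return -sum(p * math.log2(p) for p in probs)

def reward(vote, majority, votes, base=10.0):
    """Scale rewards/penalties by disagreement: when the quorum nearly
    agrees (low entropy), agreement pays and dissent costs in full;
    under high disagreement, both are softened (illustrative rule)."""
    h = vote_entropy(votes)          # 0 (consensus) .. 1 (50/50 binary split)
    weight = 1.0 - h                 # confidence of the quorum
    return base * weight if vote == majority else -base * weight

votes = ["authentic"] * 9 + ["fake"]
print(round(vote_entropy(votes), 3))  # → 0.469
```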
  • Building Scalable and Consistent Distributed Databases Under Conflicts
    (University of Waterloo, 2018-04-12) Fan, Hua; Golab, Wojciech
    Distributed databases, which rely on redundant storage across multiple servers, are able to provide mission-critical data management services at large scale. Parallelism is the key to the scalability of distributed databases, but concurrent queries with conflicts may block or abort each other when strong consistency is enforced using rigorous concurrency control protocols. This thesis studies techniques for building scalable distributed databases under strong consistency guarantees, even in the face of high-contention workloads. The techniques proposed in this thesis share a common idea, conflict mitigation: conflicts are mitigated by rescheduling operations in the concurrency control layer in the first place, rather than resolved after they arise. Using this idea, concurrent conflicting queries can be executed with high parallelism. The thesis explores this idea both in databases that support serializable ACID (atomicity, consistency, isolation, durability) transactions and in eventually consistent NoSQL systems. First, the epoch-based concurrency control (ECC) technique is proposed in ALOHA-KV, a new distributed key-value store that supports high-performance read-only and write-only distributed transactions. ECC demonstrates that concurrent serializable distributed transactions can be processed in parallel with low overhead, even under high contention. With ECC, a new atomic commitment protocol is developed that requires only one amortized round trip for a distributed write-only transaction to commit in the absence of failures. Second, a novel paradigm of serializable distributed transaction processing is developed to extend ECC with read-write transaction support. This paradigm uses a newly proposed database operator, the functor: a placeholder for the value of a key that can be computed asynchronously, in parallel with other functor computations of the same or other transactions. Functor-enabled ECC achieves more fine-grained concurrency control than transaction-level concurrency control, and it never aborts transactions due to read-write or write-write conflicts, although transactions may still fail due to logic errors or constraint violations, while guaranteeing serializability. Lastly, this thesis explores consistency in an eventually consistent system, Apache Cassandra, investigating consistency violations referred to as "consistency spikes". This investigation shows that the consistency spikes exhibited by Cassandra are strongly correlated with garbage collection, particularly the "stop-the-world" phase in the Java virtual machine. Thus, delaying read operations artificially at servers immediately after a garbage collection pause can virtually eliminate these spikes. Altogether, these techniques allow distributed databases to provide scalable and consistent storage services.
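    The epoch-based idea behind ECC can be illustrated with a toy single-node sketch (the class and method names are hypothetical, and the real ALOHA-KV is a distributed store with failure handling; this only shows how buffering write-only transactions per epoch lets concurrent writers proceed without blocking each other):

```python
class EpochKV:
    """Toy epoch-based concurrency control: write-only transactions
    submitted during an epoch are buffered and applied atomically when
    the epoch is sealed, so concurrent writers never conflict."""
    def __init__(self):
        self.store = {}
        self.pending = []          # write-only transactions of the open epoch
        self.epoch = 0

    def submit_writes(self, writes):
        """Buffer a write-only transaction {key: value} in the open epoch."""
        self.pending.append(dict(writes))
        return self.epoch

    def seal_epoch(self):
        """Apply all buffered transactions in submission order; later
        transactions in the same epoch win on key conflicts."""
        for txn in self.pending:
            self.store.update(txn)
        self.pending.clear()
        self.epoch += 1

    def read(self, key):
        """Read-only transactions observe only sealed epochs."""
        return self.store.get(key)
```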
  • Computation Reduction for Angle of Arrival Estimation Based on Interferometer Principle
    (University of Waterloo, 2017-09-28) Chandail, Mukul; Agnew, Gordon; Golab, Wojciech
    Advances in wireless technology and the advent of the Internet of Things (IoT) have marked incredible growth in wireless connectivity, culminating in a major expansion of the mobile electronics industry. Today, around 3.1 billion users are reported to be connected to the internet, along with 16.3 billion mobile electronic devices. The increasing connectivity has led to an increase in demand for mobile services and, consequently, for location services and mobility analytics. The most common location tracking or direction finding technology is the Global Positioning System (GPS), which provides location data for a client device using satellite-based lateration techniques. However, the use of GPS is largely limited to long distances and often fails when smaller distances are concerned. This thesis studies different direction finding algorithms based on angle of arrival estimation, specifically as they pertain to indoor location tracking and navigation, also known as hyperlocation. The thesis covers the main elements used in direction finding systems while surveying some of the present research in this field. It then focuses on a specific angle of arrival estimation algorithm widely used for hyperlocation solutions and proposes an alteration to the algorithm in order to achieve faster runtime performance on weaker processors. A comparison of accuracy is made between the original algorithm and the suggested solution, followed by a runtime comparison on different processing units.
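    The interferometer principle underlying such estimators relates the phase difference measured between two antennas to the arrival angle. A minimal sketch (the half-wavelength spacing and the function name are illustrative assumptions, not the thesis's algorithm):

```python
import math

def aoa_estimate(phase_diff, d, wavelength):
    """Estimate angle of arrival (radians) from the phase difference
    between two antennas spaced d metres apart (interferometer principle):
        delta_phi = 2*pi*d*sin(theta) / lambda
    Assumes d <= lambda/2 so the arcsin argument is unambiguous."""
    s = phase_diff * wavelength / (2 * math.pi * d)
    return math.asin(max(-1.0, min(1.0, s)))

# A signal arriving at 30 degrees with half-wavelength antenna spacing:
wavelength = 0.125                     # ~2.4 GHz carrier
d = wavelength / 2
true_theta = math.radians(30)
phase = 2 * math.pi * d * math.sin(true_theta) / wavelength
print(round(math.degrees(aoa_estimate(phase, d, wavelength)), 1))  # → 30.0
```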
  • Deadline-Aware Cost Optimization for Spark
    (IEEE, 2019-03-29) Sidhanta, Subhajit; Golab, Wojciech; Mukhopadhyay, Supratik
    We present OptEx, a closed-form model of job execution on Apache Spark, a popular parallel processing engine. To the best of our knowledge, OptEx is the first work that analytically models job completion time on Spark. The model can be used to estimate the completion time of a given Spark job on a cloud, with respect to the size of the input dataset, the number of iterations, and the number of nodes comprising the underlying cluster. Experimental results demonstrate that OptEx yields a mean relative error of 6 percent in estimating the job completion time. Furthermore, the model can be applied for estimating the cost-optimal cluster composition for running a given Spark job on a cloud under a completion deadline specified in the SLO (i.e., Service Level Objective). We show experimentally that OptEx is able to correctly estimate the required cluster composition for running a given Spark job under a given SLO deadline with an accuracy of 98 percent. We also provide a tool which can classify Spark jobs into job categories based on bisimilarity analysis on lineage graphs collected from the given jobs.
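    The general shape of such a closed-form model can be sketched as follows (the Amdahl-style functional form, the coefficients, and the pricing are illustrative assumptions, not OptEx's actual model):

```python
def job_time(data_size_gb, iterations, nodes,
             setup=30.0, per_gb_iter=2.0, serial_frac=0.1):
    """Toy closed-form estimate of Spark job completion time (seconds):
    a fixed setup cost plus per-iteration work, of which only the
    parallel fraction scales down with cluster size (Amdahl-style)."""
    work = per_gb_iter * data_size_gb * iterations
    return setup + serial_frac * work + (1 - serial_frac) * work / nodes

def cheapest_cluster(data_size_gb, iterations, deadline_s,
                     price_per_node_hour=0.5, max_nodes=64):
    """Smallest (hence cheapest) node count whose estimated completion
    time meets the SLO deadline; returns (nodes, hourly cost) or None."""
    for n in range(1, max_nodes + 1):
        if job_time(data_size_gb, iterations, n) <= deadline_s:
            return n, n * price_per_node_hour
    return None
```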
  • Designing an Incentive-compatible Reward Scheme for Algorand
    (University of Waterloo, 2022-06-21) Liao, Maizi; Zahedi, Seyed Majid; Golab, Wojciech
    Founded in 2017, Algorand is the first carbon-negative blockchain protocol inspired by proof of stake. Algorand uses a Byzantine agreement protocol to add new blocks to the blockchain. The protocol can tolerate malicious users as long as a supermajority of the stake is controlled by non-malicious users. The protocol achieves about 100x more throughput than Bitcoin and can be easily scaled to millions of nodes. Despite its impressive features, Algorand lacks a reward-distribution scheme to incentivize nodes to participate in the protocol. In this work, we study the incentive issue in Algorand through the lens of game theory. We model the Algorand protocol as a Bayesian game and propose a novel reward scheme to address the incentive issue in Algorand. Through rigorous analysis, we derive necessary conditions to ensure that participation in the protocol is a Bayesian Nash equilibrium even in the presence of a malicious adversary. In addition, we propose a referral mechanism to ensure that malicious nodes cannot earn more rewards in expectation compared to non-malicious nodes.
  • Detectable Data Structures for Persistent Memory
    (University of Waterloo, 2021-05-14) Li, Nan; Golab, Wojciech
    Persistent memory is a byte-addressable and durable storage medium that provides both the performance benefits of main memory and the durability of secondary storage. It is possible for a data structure to recover near-instantly after a system failure by accessing recovery data directly in persistent memory through memory operations. A variety of research efforts have focused on building persistent data structures for persistent memory. Some persistent data structures are said to be detectable, meaning they can tell whether the last operation invoked before a crash took effect or not. In this thesis, I propose an abstract data type DetectableT with a sequential specification, which can be composed with a base data type to make the base data type detectable. To show how to design detectable data structures based on DetectableT, a detectable lock-free queue algorithm called Detectable Queue, which composes DetectableT with Queue, is presented. One difficulty in the implementation of Detectable Queue is obtaining the result of a compare-and-swap (CAS) operation after a crash, since the result of CAS is stored in volatile CPU registers. To help detectable data structures handle this common problem, I provide a synchronization primitive called CASWithEffect, which executes a CAS operation and stores the result into persistent memory atomically using private variables. With CASWithEffect, another detectable queue algorithm called CASWithEffect Queue is provided as a substitute for Detectable Queue with a simpler design. Regarding correctness, I prove that both Detectable Queue and CASWithEffect Queue satisfy strict linearizability. The data structure implementations are evaluated using Intel Optane persistent memory. I compare both Detectable Queue and CASWithEffect Queue with another queue algorithm, Log Queue. The results show that Detectable Queue has the best performance.
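    The idea behind CASWithEffect, a CAS whose outcome survives a crash, can be sketched in memory (a toy model with hypothetical names: the lock stands in for hardware atomicity, a dictionary stands in for persistent memory, and real implementations need persistence fences):

```python
import threading

class CASWithEffect:
    """Toy model of a CAS whose result is recorded in a (simulated)
    persistent results table atomically with the CAS itself, so a
    recovering process can learn whether its last CAS took effect."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()        # stands in for atomicity
        self.persistent_result = {}          # pid -> (seq, succeeded)

    def cas(self, pid, seq, expected, new):
        with self._lock:
            ok = self._value == expected
            if ok:
                self._value = new
            self.persistent_result[pid] = (seq, ok)   # "persisted" with the CAS
            return ok

    def recover(self, pid, seq):
        """After a crash, ask whether the CAS numbered `seq` took effect;
        None means that CAS never executed."""
        rec = self.persistent_result.get(pid)
        if rec and rec[0] == seq:
            return rec[1]
        return None
```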
  • EA-PHT-HPR: Designing Scalable Data Structures for Persistent Memory
    (University of Waterloo, 2020-08-28) Cepeda, Diego; Golab, Wojciech
    Volatile memory has long dominated the realm of main memory in servers and computers. In 2019, Intel released the Optane data center persistent memory modules (DCPMM) to the public. These devices offer the capacity and persistence of block devices while providing the byte addressability and low latency of DRAM. The introduction of this technology allows programmers to develop data structures that remain in main memory across crashes and power failures. Implementing recoverable code is not an easy task and adds a new degree of complexity to how we develop and prove the correctness of code. This thesis explores the different approaches that have been taken to develop persistent data structures, specifically hash tables. The work presents an iterative process for the development of a persistent hash table. The proposed designs are based on a previously implemented DRAM design. We intend for the design of the hash table to remain similar to its original DRAM design while achieving high performance and scalability in persistent memory. Through each step of the iterative process, the proposed design's weak points are identified, and the implementations are compared to current state-of-the-art persistent hash tables. The final proposed design is a hybrid hash table implementation that achieves up to 47% higher performance in write-heavy workloads, and up to 19% higher performance in read-only workloads, compared to the dynamic and scalable hashing (DASH) implementation, currently one of the fastest hash tables for persistent memory. In addition, to reduce the latency of a full table resize operation, the proposed design incorporates a new full table resize mechanism that takes advantage of parallelization.
  • Energy Efficient Energy Analytics
    (University of Waterloo, 2017-05-19) De, Sagnik; Golab, Wojciech
    Smart meters allow for hourly collection of data on customers' power consumption. However, this results in thousands of data points, which hide broader trends in power consumption and make it difficult for energy suppliers to make decisions regarding a specific customer or a large number of customers. Since data without analysis is useless, various algorithms have been proposed to lower the dimensionality of the data, discover trends (e.g., regression), study relationships between different types of collected data (e.g., temperature and power), and summarize data (e.g., histograms). This allows for easy consumption by the end user. Smart meter data is very compute-intensive to process, as there are a large number of houses and each house has data collected over several years. To speed up smart meter data analysis, computer clusters have been used. Ironically, these clusters consume a lot of power; studies have shown that about 10% of power is consumed by the computing infrastructure. In this thesis, a GPU is used to perform analysis of smart meter data and is compared to a baseline CPU implementation. The thesis shows that GPUs are not only faster than the CPU but also more power efficient.
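    Two of the summarization steps mentioned above, dimensionality reduction and trend discovery via regression, can be sketched as follows (a CPU-only illustration with hypothetical function names; the thesis's point is that such data-parallel kernels map well onto a GPU):

```python
from statistics import mean

def daily_means(hourly_kwh):
    """Reduce hourly smart-meter readings to one mean per day,
    lowering dimensionality while preserving the broad trend."""
    return [mean(hourly_kwh[d:d + 24]) for d in range(0, len(hourly_kwh), 24)]

def trend_slope(values):
    """Least-squares slope of a series: positive means consumption
    is rising over time."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = (n - 1) / 2, mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den
```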
  • Gossip-Based Visibility Control for High Performance Geo-Distributed Transactions
    (Springer Nature, 2021-01) Fan, Hua; Golab, Wojciech
    Providing ACID transactions under conflicts across globally distributed data is the Everest of transaction processing protocols. Transaction processing in this scenario is particularly costly due to the high latency of cross-continent network links, which inflates concurrency control and data replication overheads. To mitigate the problem, we introduce Ocean Vista—a novel distributed protocol that guarantees strict serializability. We observe that concurrency control and replication address different aspects of resolving the visibility of transactions, and we address both concerns using a multi-version protocol that tracks visibility using version watermarks and arrives at correct visibility decisions using efficient gossip. Gossiping the watermarks enables asynchronous transaction processing and acknowledging transaction visibility in batches in the concurrency control and replication protocols, which improves efficiency under high cross-data center network delays. In particular, Ocean Vista can access conflicting transactions in parallel and supports efficient write-quorum/read-one access using one round trip in the common case. We demonstrate experimentally in a multi-data center cloud environment that our design outperforms a leading distributed transaction processing engine (TAPIR) more than tenfold in terms of peak throughput, albeit at the cost of additional latency for gossip and a more restricted transaction model. The latency penalty is generally bounded by one wide area network (WAN) round trip time (RTT), and in the best case (i.e., under light load) our system nearly breaks even with TAPIR by committing transactions in around one WAN RTT.
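    The watermark-based visibility idea can be sketched minimally (function names are hypothetical; in Ocean Vista the per-server watermarks are exchanged via asynchronous gossip across data centers rather than read from a local dictionary):

```python
def stable_watermark(server_watermarks):
    """The globally stable watermark is the minimum of the watermarks
    gossiped by all servers: every transaction version below it has
    been fully processed everywhere."""
    return min(server_watermarks.values())

def visible(version_ts, server_watermarks):
    """Multi-version visibility check: a transaction with timestamp
    version_ts is safe to read once every server's watermark has
    advanced past it."""
    return version_ts < stable_watermark(server_watermarks)
```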
  • Recoverable mutual exclusion
    (Springer Nature, 2019-11-05) Golab, Wojciech; Ramaraju, Aditya
    Mutex locks have traditionally been the most common mechanism for protecting shared data structures in concurrent programs. However, the robustness of such locks against process failures has not been studied thoroughly. The vast majority of mutex algorithms are designed around the assumption that processes are reliable, meaning that a process may not fail while executing the lock acquisition and release code, or while inside the critical section. If such a failure does occur, then the liveness properties of a conventional mutex lock may cease to hold until the application or operating system intervenes by cleaning up the internal structure of the lock. For example, a process that is attempting to acquire an otherwise starvation-free mutex may be blocked forever waiting for a failed process to release the critical section. Adding to the difficulty, if the failed process recovers and attempts to acquire the same mutex again without appropriate cleanup, then the mutex may become corrupted to the point where it loses safety, notably the mutual exclusion property. We address this challenge by formalizing the problem of recoverable mutual exclusion, and proposing several solutions that vary both in their assumptions regarding hardware support for synchronization, and in their efficiency. Compared to known solutions, our algorithms are more robust as they do not restrict where or when a process may crash, and provide stricter guarantees in terms of efficiency, which we define in terms of remote memory references.
  • Recoverable Mutual Exclusion in Detectable Lock-Based Data Structures
    (University of Waterloo, 2025-01-16) Fahmy, Ahmed; Golab, Wojciech
    Persistent memory (PM) is an emerging technology that offers the speed of DRAM combined with the persistence of traditional storage. This advancement provides unique opportunities and challenges for designing data structures that remain consistent and recoverable after system failures. This thesis presents significant advancements in the design and implementation of recoverable synchronization algorithms and data structures optimized for PM. The research focuses on addressing the challenges of recoverable mutual exclusion (RME) and the development of detectable lock-based data structures, offering innovative solutions that enhance performance and reliability in concurrent systems. A major contribution of this work is the introduction of the Recoverable Filter (RF) lock, a novel approach that enhances RME lock performance in the Non-Uniform Memory Access (NUMA) multi-processor architecture, where memory access time depends on the memory location relative to the processor. Such a technique transforms NUMA-oblivious RME locks into NUMA-aware ones without requiring modifications to the underlying locks. This solution tackles the dual challenges of recoverability and NUMA-awareness, a combination not previously addressed in the literature on RME locks. Comprehensive empirical evaluations using prominent RME algorithms, specifically GH and JJJ, demonstrate that the RF lock significantly boosts performance by up to 45% in multi-socket configurations, while maintaining minimal overhead in single-socket setups. These findings underscore the effectiveness of the RF lock in leveraging memory locality to improve efficiency. Additionally, this thesis introduces DULL, a Detectable Unrolled Lock-based Linked List, designed for persistent memory environments. DULL distinguishes itself as the fastest detectable lock-based linked list and the first to achieve strict-linearizability. 
The implementation of DULL utilizes volatile RME locks to address the intricate challenge of preserving essential lock properties, such as mutual exclusion and deadlock-freedom, across system failures. Performance evaluations reveal that DULL outperforms existing solutions in update-intensive scenarios and maintains scalability even under high processor subscription levels. By exploring the challenges of designing concurrent recoverable algorithms with incorporated locks, this thesis makes substantial contributions to the field of concurrent data structures and synchronization algorithms. The innovative solutions presented herein lay a robust foundation for future research and practical applications, enhancing the reliability and performance of concurrent systems in persistent memory environments.
  • A Scalable Recoverable Skip List for Persistent Memory on NUMA Machines
    (University of Waterloo, 2021-10-20) Chowdhury, Sakib; Golab, Wojciech
    Interest in recoverable, persistent-memory-resident (PMEM-resident) data structures is growing as the availability of Intel Optane Data Center Persistent Memory increases. An interesting use case for in-memory, recoverable data structures is database indexes, which need high availability and reliability. Skip lists are particularly well-suited for use as a fully PMEM-resident index, due to the reduced number of writes resulting from their probabilistic balancing, in comparison to other index data structures such as B-trees. The Untitled Persistent Skip List (UPSkipList) is a PMEM-resident recoverable skip list derived from Herlihy et al.'s lock-free skip list algorithm. It is developed using a new conversion technique that extends the RECIPE algorithm by Lee et al. to work on lock-free algorithms with non-blocking writes and no inherent recovery mechanism. It does this by tracking the current time period between two failures, or failure-free epoch, and recording the current epoch in nodes as they are modified. This way, an observing thread can determine whether an inconsistent node is being modified in the current epoch or was being modified in a previous epoch and is now in need of recovery. The algorithm is also extended to support concurrent data node splitting to improve performance, which is easily made recoverable using the extension to RECIPE that allows detection of incomplete node splits. UPSkipList also supports cache-efficient NUMA awareness of dynamically allocated objects using an extension to the Region-ID in Value (RIV) method by Chen et al. By using additional bits after the most significant bits of an RIV pointer to indicate the object relative to which the remaining bits are interpreted, chunks of memory can be dynamically allocated to UPSkipList from multiple shared pools without the need for fat pointers, which reduce cache efficiency by halving the number of pointers that fit in a cache line.
    This combines the benefits of the RIV method and of the dynamic memory allocation method built into the Persistent Memory Development Kit (PMDK), improving both performance and practicality. Additionally, memory manually managed within a chunk using the RIV method can have its recovery after a crash deferred to the next attempted allocation by a thread sharing the ID of the thread responsible for allocating the memory being recovered, reducing recovery time for large pools with many threads active at the time of a crash. Comparison was done against the BzTree of Arulraj et al., as implemented by Lersch et al., which has non-blocking, non-repairing writes implemented using the persistent multi-word CAS (PMwCAS) primitive by Wang et al., and against a transactional recoverable skip list implemented using the PMDK. Tested with the Yahoo Cloud Serving Benchmark (YCSB), UPSkipList achieves better performance than BzTree in write-heavy workloads at high levels of concurrency, and outperforms the PMDK-based skip list due to the latter's higher average latency. Using the extended RIV pointers to dynamically allocate memory resulted in a 40% performance increase over using the PMDK's fat pointers. The impact of NUMA awareness using multiple pools of memory, compared with striping a single pool across multiple nodes, was found to be only a 5.6% decrease in performance. Finally, the recovery time of UPSkipList was found to be comparable to that of the PMDK-based skip list, and 9 times faster than BzTree with 500K descriptors in its PMwCAS pool. The correctness of UPSkipList and its conversion and recovery techniques was tested using black-box recoverable linearizability analysis, which found UPSkipList to be free of strict linearizability errors across 30 trials.
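    The extended RIV pointer layout can be illustrated with simple bit packing (the field widths and function names are assumptions for illustration, not the actual layout used by UPSkipList):

```python
REGION_BITS = 4      # up to 16 memory pools (illustrative width)
OBJECT_BITS = 8      # up to 256 chunks per pool (illustrative width)
OFFSET_BITS = 32     # chunk-relative offset

def encode(region_id, chunk_id, offset):
    """Pack a region ID, a chunk ID, and a chunk-relative offset into
    one integer "pointer", so cross-pool references stay one word wide
    instead of becoming cache-unfriendly fat pointers."""
    assert region_id < (1 << REGION_BITS)
    assert chunk_id < (1 << OBJECT_BITS)
    assert offset < (1 << OFFSET_BITS)
    return ((region_id << (OBJECT_BITS + OFFSET_BITS))
            | (chunk_id << OFFSET_BITS) | offset)

def decode(ptr):
    """Recover (region_id, chunk_id, offset) from a packed pointer."""
    offset = ptr & ((1 << OFFSET_BITS) - 1)
    chunk_id = (ptr >> OFFSET_BITS) & ((1 << OBJECT_BITS) - 1)
    region_id = ptr >> (OBJECT_BITS + OFFSET_BITS)
    return region_id, chunk_id, offset
```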
  • ShallowForest: Optimizing All-to-All Data Transmission in WANs
    (University of Waterloo, 2019-05-23) Tan, Hao; Golab, Wojciech; Srinivasan, Keshav
    All-to-all data transmission is a typical data transmission pattern in both consensus protocols and blockchain systems. An optimization scheme that provides high-throughput, low-latency data transmission can significantly benefit the performance of those systems. This thesis investigates the problem of optimizing all-to-all data transmission in a wide area network (WAN) using overlay multicast. I first prove that, in a congestion-free core network model, using shallow tree overlays of height at most two is sufficient for all-to-all data transmission to achieve the optimal throughput allowed by the available network resources. Based on this finding, I build ShallowForest, a data plane optimization for consensus protocols and blockchain systems. The goal of ShallowForest is to improve consensus protocols' resilience to skewed client load distribution. Experiments with skewed client load across replicas in the Amazon cloud demonstrate that ShallowForest can improve the commit throughput of the EPaxos consensus protocol by up to 100% with up to 60% reduction in commit latency.
  • Snapshotting Mechanisms for Persistent Memory-Mapped Files
    (University of Waterloo, 2023-09-21) Moridi, Mohammad; Golab, Wojciech
    In this research, we explore ways to enhance the reliability of persistent memory systems. Using Montage (ICPP'21) as our reference model, we identify areas of potential improvement, especially concerning the risk of data loss in certain failure scenarios. Our investigation led us to focus on the concept of snapshotting and its role in system resilience. We delve into various consistency models for snapshotting mechanisms and introduce a new definition for snapshotting consistency known as Buffered-Durable Consistency. Montage offers impressive resilience against system-wide crash failures. However, we perceive opportunities for further fortification, specifically when persistent memory failures occur, which can lead to substantial data loss. Addressing this challenge, we propose two new snapshotting mechanisms - stop-the-world and online snapshotting - for memory-mapped files. These mechanisms selectively replicate only those data portions that have been modified since the last snapshot, thus significantly reducing the volume of data copying during snapshot operations. In order to ensure data consistency and optimize snapshotting, we introduce modifications to both Montage and its allocator. This includes a parallel chunk-copying strategy for the stop-the-world snapshotting implementation. Additionally, our online snapshotting mechanism allows updates to chunks not being replicated by the snapshotter, which increases system responsiveness. We have also developed an algorithm that, when snapshotting is not in progress, disables the reader locks, reducing the overhead associated with such locks and further enhancing performance. To demonstrate the effectiveness of our approach, we present an experimental analysis showing throughput and latency for various scenarios. 
Overall, our work not only heightens the fault tolerance capabilities of Montage, but also offers critical insights and potential directions for future research and optimization in the field of persistent memory, especially for memory-mapped file replication.
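    The dirty-chunk snapshotting idea can be sketched as follows (a toy stop-the-world variant with hypothetical names; the actual mechanisms operate on persistent memory-mapped files and must handle concurrent writers):

```python
class ChunkedFile:
    """Toy memory-mapped file split into fixed-size chunks; writes mark
    chunks dirty so a snapshot copies only the chunks modified since
    the previous snapshot (stop-the-world: no writes during the copy)."""
    def __init__(self, chunks):
        self.chunks = list(chunks)
        self.dirty = set(range(len(chunks)))   # everything dirty initially
        self.snapshot = [None] * len(chunks)

    def write(self, i, data):
        """Update chunk i and remember that it must be re-copied."""
        self.chunks[i] = data
        self.dirty.add(i)

    def take_snapshot(self):
        """Copy only dirty chunks into the snapshot; returns how many
        chunks were copied (the volume of copying saved is the rest)."""
        copied = len(self.dirty)
        for i in self.dirty:
            self.snapshot[i] = self.chunks[i]
        self.dirty.clear()
        return copied
```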
  • Tuning and Predicting Consistency in Distributed Storage Systems
    (University of Waterloo, 2017-09-27) Chatterjee, Shankha; Golab, Wojciech
    Distributed storage systems are constrained by the finite speed of propagation of information. The CAP theorem (which stands for consistency, availability, and partition tolerance) states that in the presence of network partitions, a choice has to be made between availability and consistency. However, even in the absence of failures, a trade-off exists between consistency and the latency of operations (reads and writes). Eventually consistent storage systems often sacrifice consistency for high availability and low latencies. One way to achieve fine-tuning in the consistency-latency trade-off space is to inject an artificial delay into each storage operation. This thesis describes an adaptive tuning framework that calculates the value of artificial delay to inject into each storage operation to meet a specific consistency target. The framework adapts nimbly to environmental changes in the storage system to maintain target consistency levels. It consists of a feedback loop which uses a technique called spectral shifting at each iteration to calculate the target value of artificial delay from a history of operations. The tuning framework converges to the target value of artificial delay much faster than the state-of-the-art solution. This thesis also presents a probabilistic analysis of inconsistencies in eventually consistent distributed storage systems operating under weak (read one, write one) consistency settings. The analysis takes into account symmetrical (the same for reads and writes) artificial delays, which enable consistency-latency tuning. A mathematical formula for the percentage of inconsistent operations is derived from other environmental parameters pertaining to the storage system.
    The formula's predictions for the proportion of inconsistent operations closely match observations from a stochastic simulator of the storage system running 10^6 operations per experiment, as well as from a widely used key-value store (Apache Cassandra).
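    One iteration of such a feedback loop can be sketched with a simple proportional controller (an illustrative stand-in: the thesis's spectral shifting technique computes the delay from a history of operations rather than from a fixed gain, and the names here are hypothetical):

```python
def tune_delay(delay_ms, observed_incons, target_incons,
               gain=50.0, max_delay=100.0):
    """One iteration of a proportional feedback loop: if more operations
    were inconsistent than the target allows, inject more artificial
    delay (trading latency for consistency); if fewer, back off."""
    error = observed_incons - target_incons
    return min(max_delay, max(0.0, delay_ms + gain * error))
```

Repeatedly applying `tune_delay` with fresh measurements drives the injected delay toward the smallest value that meets the consistency target.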
  • Understanding Scalability Issues in Sharded Blockchains
    (University of Waterloo, 2020-12-11) Nguyen, Anh Duong; Golab, Wojciech
    Since the release of Bitcoin in 2008, cryptocurrencies have attracted attention from academia, government, and enterprises. Blockchain, the backbone ledger in many cryptocurrencies, has shown its potential as a data structure that carries information over the network securely without the need for a centralized trusted party. In this thesis, I delve into the consensus protocols used in permissioned blockchains and analyze the sharding technique that aims to improve the scalability of blockchain systems. I discuss a permissioned sharded blockchain that I use to examine different methods of interleaving blocks, referred to as strong temporal coupling and weak temporal coupling. I provide empirical experiments to show the role of lightweight nodes in solving the scalability issues in sharded blockchain systems. The results suggest that the weak temporal coupling method performs worse than the strong temporal coupling method and is more susceptible to an increase in network latency. The results also show the importance of separating the roles of nodes and adding lightweight nodes to improve the performance and scalability of sharded blockchain systems.
