License: arXiv.org perpetual non-exclusive license
arXiv:2501.05262v2 [cs.NI] 14 Jan 2025
Isaac Zhang, Ryan Zarick, Daniel Wong, Thomas Kim, Bryan Pellegrino, Mignon Li, Kelvin Wong (LayerZero Labs)

Abstract
Quick Merkle Database (QMDB) addresses longstanding bottlenecks in blockchain state management by integrating key-value (KV) and Merkle tree storage into a single unified architecture. QMDB delivers a significant throughput improvement over existing architectures, achieving up to 6× over the widely used RocksDB and 8× over NOMT, a leading verifiable database. Its novel append-only twig-based design enables one SSD read per state access, O(1) IOs for updates, and in-memory Merkleization on a memory footprint as small as 2.3 bytes per entry, enabling it to run on even modest consumer-grade PCs. QMDB scales seamlessly across both commodity and enterprise hardware, achieving up to 2.28 million state updates per second. This performance enables support for 1 million token transfers per second (TPS), marking QMDB as the first solution achieving such a milestone. QMDB has been benchmarked with workloads exceeding 15 billion entries (10× Ethereum’s 2024 state) and has proven the capacity to scale to 280 billion entries on a single server. Furthermore, QMDB introduces historical proofs, unlocking the ability to query its blockchain’s historical state at the latest block. QMDB not only meets the demands of current blockchains but also provides a robust foundation for building scalable, efficient, and verifiable decentralized applications across diverse use cases.

1 Introduction
Updating, managing, and proving world state are key bottlenecks facing the execution layer in modern blockchains. Within the execution layer, the storage layer in particular has traditionally traded off performance (throughput) against decentralization (capital and infrastructure barriers to participation).
Blockchains typically implement state management using an Authenticated Data Structure (ADS) such as a Merkle Patricia Trie (MPT). Unfortunately, typical MPT-based ADSes incur a high amount of write amplification (WA), with many costly random writes for each state update, which requires storing the entire structure in DRAM to avoid being bottlenecked by the SSD. As a result, the performance and scaling of blockchains are I/O-bound, and the key to unlocking higher performance with larger datasets is to use SSD IOPS more efficiently and reduce WA. We present Quick Merkle Database (QMDB), a resource-efficient SSD-optimized ADS with in-memory Merkleization that implements a superset of the app-level features of existing RocksDB-backed MPT ADSes with 6× throughput on large datasets. QMDB performs state reads with a single SSD read, state updates with O(1) IO, and Merkleization fully in memory with no SSD reads or writes. These operations are theoretically optimal with respect to disk IO complexity. Additionally, QMDB has a DRAM footprint small enough to run on consumer-grade PCs.
Blockchain state storage is typically handled by an Authenticated Data Structure (ADS) acting as a proof layer (e.g., a Merkle Patricia Trie) in combination with a physical storage layer. The proof layer efficiently generates inclusion and exclusion proofs against the world state, while the physical storage layer stores the actual world state keys and values. In many existing blockchains, these layers are each stored in a separate general-purpose key-value store such as RocksDB, resulting in duplicated data and general inefficiency. Storing an MPT (O(log N) insertion) in a general-purpose key-value store (O(log N) insertion) results in each state update incurring O((log N)^2) SSD IOs.
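The compounding cost above can be made concrete with a rough back-of-the-envelope sketch (illustrative only, not a measurement; the constants are deliberately simplified to the asymptotic terms):

```python
import math

def mpt_in_kvdb_ios(n_entries: int) -> int:
    """Rough IO count for one state update when an MPT (O(log N) nodes
    touched per update) is stored in a KV store that itself needs
    O(log N) IOs per node access, i.e. O((log N)^2) overall."""
    depth = math.ceil(math.log2(n_entries))
    return depth * depth

def qmdb_ios(n_entries: int) -> float:
    """QMDB: one SSD read to fetch the current entry, plus 1/2048 of a
    batched sequential flush for the appended entry."""
    return 1 + 1 / 2048

n = 10**9                    # ~1 billion entries
print(mpt_in_kvdb_ios(n))    # 900 IOs per update (30 * 30)
print(qmdb_ios(n))           # ~1.0005 IOs per update
```

Even granting the MPT generous caching, the quadratic log factor dominates at billion-entry scale, which is why the paper targets O(1) IO per update.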
QMDB eliminates this inefficiency by unifying the world state and Merkle tree storage, persisting all state updates in an append-only log, and eliminating all SSD reads and writes from Merkleization. By grouping updates into fixed-size immutable subtrees called twigs, QMDB can Merkleize state updates without reading or writing any world state; this essentially compresses the Merkle tree by several orders of magnitude, allowing it to be stored in a modest amount of DRAM. QMDB leverages typical blockchain workload characteristics to eliminate features commonly found in KVDBs, such as key iteration, thereby reducing performance bottlenecks. These optimizations enable QMDB to achieve 6× throughput compared to RocksDB, a general-purpose key-value database that does not perform Merkleization. We also show that QMDB outperforms a prerelease version of NOMT, a state-of-the-art verifiable database, by up to 8×. We validate QMDB’s scaling characteristics with experiments up to 15 billion entries (10× Ethereum’s 2024 state size) and show it scales on both consumer-grade and enterprise-grade hardware. QMDB is a transformative improvement for blockchain developers, addressing today’s storage challenges and unlocking new possibilities for blockchain applications. In particular: 1) QMDB can serve massive workloads with the same amount of DRAM, allowing blockchains to handle more users and transactions; 2) based on its low memory overhead per entry, QMDB can theoretically scale up to 280 billion entries on a single server, far exceeding any blockchain’s requirements today; and 3) QMDB can scale down to consumer-grade hardware, decreasing barriers to participation and improving decentralization.
Figure 1: Entries are inserted sequentially into the leaves of the Fresh twig, and all leaves have the same depth. The twig eventually transitions into the Full state. As Entries are deleted, Full twigs become Inactive, then transition to Pruned.
Upper nodes are recursively pruned after both of their children are pruned.

2 Background
We explain the design of other verifiable databases and related data structures, including prior work reducing write amplification of verifiable databases [19, 13]. MPTs combine the efficient proof generation of the Merkle tree with the fast lookups of the Patricia trie and are a common choice of ADS on today’s blockchains [23]. In a database of N items, updating a single state entry in an MPT has a time complexity of O(log N) [17]. However, MPTs and other existing trie-based ADSes suffer from large proofs and a dependency on the client having a large amount of physical memory to avoid excessive random SSD reads. At the same time, MPTs are not well suited to flash storage, as their randomly distributed, update-heavy workloads result in high WA. Moreover, the worst-case size for inclusion and exclusion proofs can be quite large. These factors make Merkleization a significant bottleneck that limits the overall throughput of the execution layer and the blockchain. AVL-tree-based ADSes are popular alternatives to MPTs, as they achieve faster updates, lookups, and proof generation due to the self-balancing AVL tree. The AVL tree is path-dependent, unlike the MPT, meaning its state root is influenced by the specific sequence of state change actions. AVL trees provide a marginal performance increase over MPTs in the average case, but still incur O(log N) tree node modifications per state update. LVMT [13] proposes a layered storage model to reduce the space and complexity of maintaining authenticated blockchain states. By partitioning the state into multiple segments and using cryptographic accumulators, it compresses less frequently accessed data while preserving verifiability. Proof generation becomes simpler, as intermediate accumulators shorten authentication paths.
However, integrating multiple layers increases system complexity and demands careful configuration; suboptimal settings can lead to poor performance. Furthermore, LVMT depends on well-optimized cryptographic primitives. MoltDB [14] improves on existing two-layer MPT designs by segregating states by recency and coupling that with a compaction process; it reduces I/O and shows a 30% throughput increase over Geth. NOMT is a state-of-the-art ADS that uses a flash-optimized layout for a binary Merkle tree with compressed metadata, overcoming some limitations of existing MPT-based ADS implementations. NOMT implements an array of improvements including tree arity, a flash-native layout, a write-ahead log, and caching. This design results in better performance than existing solutions and has garnered interest in the space. However, NOMT remains an implementation-level optimization of the MPT, offering only constant-factor reductions in disk I/O. It still faces inherent asymptotic limitations and write amplification issues. Additionally, it is affected by the key sparsity problem commonly observed in trie-based structures. Merkle Mountain Ranges (MMRs) [22] enable compact inclusion proofs and are append-only, which makes the IO pattern for updating state conducive to efficient usage of SSD IOPS. Each MMR is a list of Merkle subtrees (peaks), and peaks of equal size are merged as new records are appended. MMRs are not suitable for live state management, as they cannot natively handle deletes, updates, lookups by key, or exclusion proof generation. As a result, MMRs have generally found success in historical data management [18], where the key is just an index. Acceleration of Merkle tree computation has been an area of active research, with several proposed techniques such as caching [8, 5], optimizing subtrees [4], and using specialized hardware [12, 6].
These improvements are orthogonal to QMDB and could be applied to further improve its performance and efficiency. Verifiable ledger databases are systems that allow users to verify that a log is indeed append-only; blockchains are a subset of such systems. A common approach to implementing a verifiable ledger database is deferred verification [25, 24, 3]. GlassDB [25] uses a POS-tree (a Merkle tree variant) as an ADS for efficient proofs. Amazon’s QLDB [2], Azure’s SQL Ledger [3], and Alibaba’s LedgerDB [24] are commercially available verifiable databases that use Merkle trees (or variants) internally to provide transparency logs. VeritasDB [21] uses trusted hardware (SGX) to aid verification. The key difference between these databases and QMDB is that QMDB is optimized for frequent state updates and real-time verification of the current state (as opposed to verification of historical logs and deferred verification).

Field         Description                                     Purpose
Id            Unique identifier (e.g., nonce)                 Prove key inclusion
Key           Application key                                 Identify the key
Value         Current state value of the key                  Serve application logic
NextKey       Lexicographic successor of Key                  Prove key exclusion
OldId         Id of the Entry previously containing Key       Prove historical inclusion / exclusion
OldNextKeyId  Id of the Entry previously containing NextKey   Prove key deletion
Version       Block height and transaction index              Query state by block height
Table 1: Fields in a QMDB entry. Id and Version are 8 bytes; Key holds up to 2^8 bytes and Value up to 2^24 bytes.

3 QMDB Design
QMDB is architected as a binary Merkle tree, illustrated in Figure 1. At the top is a single global root that connects a set of shard roots, each of which represents the subtree of the world state that is managed by an independent QMDB shard.
The shard root itself is connected to a set of upper nodes, which, in turn, are connected to fixed-size subtrees called twigs; each of these twigs has a root that stores the Merkle hash of the subtree and a bitmap called ActiveBits to track which entries are part of the most current world state. The twig root is determined by the sequence of entries, making it path-dependent. Entries (the twig’s leaves) are append-only and immutable, making it unnecessary to read or write the entry root during Merkleization; this results in QMDB only ever reading/writing the global root, shard roots, upper nodes, and twig roots during Merkleization. The twig essentially compresses the actual state keys and values into a single hash and bitmap, making the data required for Merkleization small enough to fit in a small amount of DRAM rather than being stored on SSD. In this section we begin by explaining the underlying storage primitives used to organize state data (Section 3.1), followed by a discussion of the indexer in Section 3.2. In Section 3.3 we describe the high-level CRUD interface exported by QMDB to clients. In Section 3.4 we describe how the storage backend and indexer facilitate generation of state proofs, and discuss how these state proofs can be statelessly validated. Finally, in Section 3.5 we explain how QMDB takes advantage of additional optimizations such as sharding and pipelining to scale throughput via improved parallelism.
3.1 Entries and Twigs
The entry (Table 1) is the primitive data structure in QMDB, encapsulating key-value pairs with the metadata required for efficient proof generation. Entries can be extended to support features such as historical state proof generation (Section 3.4).
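The fields of Table 1 can be sketched as a plain record. This is an illustrative model only, not QMDB’s actual on-disk layout; the `covers` helper is our own naming for the key-range condition that exclusion proofs rely on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # entries are append-only and immutable
class Entry:
    id: int               # unique identifier (e.g., nonce); proves key inclusion
    key: bytes            # application key (QMDB keys by its hash)
    value: bytes          # current state value for the key
    next_key: bytes       # lexicographic successor of key; proves key exclusion
    old_id: int           # Id of the entry previously containing key
    old_next_key_id: int  # Id of the entry previously containing next_key
    version: int          # block height and transaction index

    def covers(self, k: bytes) -> bool:
        """True if k falls strictly between key and next_key; an inclusion
        proof of such an entry doubles as an exclusion proof for k."""
        return self.key < k < self.next_key
```

Because `next_key` is the lexicographic successor, an active entry whose range covers a key k certifies that no entry for k itself exists.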
QMDB keys entries by the hash of the application-level key, resulting in improved load balancing via uniform key distribution across shards (Section 3.5).

State     Description        Entries   Twig Root
Fresh     ≤2047 Entries      DRAM      DRAM
Full      2048 Entries       SSD       DRAM
Inactive  0 active Entries   Deleted   SSD
Pruned    Subtree deleted    Deleted   Deleted
Table 2: As twigs progress through their lifecycle, their footprint in DRAM gets smaller. An inactive twig has a 99.9% smaller memory footprint than a full twig.

Twigs are subtrees within QMDB’s Merkle tree; each twig has a fixed depth and, by extension, a fixed number of entries stored in leaf nodes of the same depth (2048 in our implementation). A set of upper nodes connects all twigs to a single shard root, with null nodes representing uninitialized values; these upper nodes are immutable once all their descendant entries have been initialized. In addition to the actual Merkle subtree, twigs also store the Merkle hash of their root node and ActiveBits, a bitmap that records whether each contained entry holds state that has not been overwritten or deleted. The twig essentially compresses the information required to Merkleize 2048 entries and their upper nodes (≥256 KB) into a single 32-byte hash and a 256-byte bitmap (99.9% compression). This compression is the key to enabling fully in-memory Merkleization in QMDB. Fresh twigs reside completely in DRAM, and entries are sequentially inserted into their leaf nodes. Once a twig reaches 2048 entries, its contents are asynchronously flushed to SSD in a large sequential write and deleted from DRAM, maximizing SSD utilization and minimizing DRAM footprint. Each twig follows a lifecycle of four states: Fresh, Full, Inactive, and Pruned (Table 2). An example of the layout of QMDB’s state tree is presented in Figure 1. There is exactly one fresh twig per shard, and entries are always appended to the fresh twig.
After all entries in the twig are marked inactive as a result of update and delete operations, the twig transitions into the Inactive state before eventually being pruned and replaced by the Merkle hash of its root. Upper nodes that contain only pruned twigs are recursively pruned, further reducing the memory footprint of QMDB; a dedicated garbage collection thread duplicates old valid entries into the fresh twig, reducing fragmentation and allowing larger subtrees to be pruned. In theory, once the entire subtree originating at a child of the shard root is pruned, the root itself can be pruned to reduce the depth of the tree by one. The grouping of entries into twigs reduces the DRAM footprint of QMDB to the degree that all nodes affected by Merkleization can be stored in a small amount of DRAM. In a hypothetical scenario with 2^30 entries (approx. 1 billion), the system must keep at most 2^19 (= 2^30 / 2048) 288-byte (32-byte twig root hash + 2048-bit ActiveBits bitmap) full twigs, 1 fresh twig, and 2^19 − 1 32-byte (node hash) upper nodes, totaling around 160 megabytes. In practice, the majority of the 2^19 twigs will be pruned, resulting in a much smaller average size. Inactive and Pruned twigs cannot be modified, and thus do not require further Merkleization. Fresh and Full twigs must be Merkleized every time the ActiveBits bitmap changes, and Fresh twigs must additionally be Merkleized every time an entry is added. The upper nodes of the Merkle tree are recomputed on startup and are never persisted to SSD; this recomputation requires reading all twig hashes from SSD and performing 2 hashes per twig, and can be completed in a matter of milliseconds for the previous example of 1 billion entries. QMDB stores an entry every time state is modified, making the state tree grow proportionally to the number of state modifications.
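The 1-billion-entry footprint estimate can be reproduced with a short calculation; all constants come directly from the text (2048 entries per twig, a 32-byte twig root hash, a 2048-bit ActiveBits bitmap, 32-byte upper-node hashes):

```python
TWIG_ENTRIES = 2048
TWIG_ROOT_HASH = 32                    # bytes
ACTIVE_BITS = TWIG_ENTRIES // 8        # 2048-bit bitmap = 256 bytes
UPPER_NODE_HASH = 32                   # bytes per upper node

def merkleization_dram(n_entries: int) -> int:
    """Worst-case bytes of DRAM needed to Merkleize n_entries (all twigs
    Full, none pruned), per the estimate in Section 3.1."""
    full_twigs = n_entries // TWIG_ENTRIES          # 2^19 for 2^30 entries
    upper_nodes = full_twigs - 1                    # binary tree above the twigs
    return (full_twigs * (TWIG_ROOT_HASH + ACTIVE_BITS)
            + upper_nodes * UPPER_NODE_HASH)

print(merkleization_dram(2**30) / 2**20)  # ~160 MiB, matching "around 160 megabytes"
```

In practice pruning removes most full twigs, so the resident set is considerably smaller than this worst case.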
To combat this tree growth, a dedicated compaction worker periodically compacts QMDB’s state tree by removing and re-appending old entries to the fresh twig, accelerating the progression of the twig lifecycle and allowing more subtrees to be pruned. The compaction logic must be deterministic when used in a consensus system or for stateless validation. The current implementation ensures that the active entry ratio per shard remains above a predefined threshold, triggering compaction during updates and insertions. QMDB’s Merkle proof size and proof generation complexity grow proportionally to the logarithm of the number of state updates (U) rather than the number of unique keys (K), due to its append-only nature. However, the ratio of U to K remains small enough that the order-of-magnitude improvement in Merkleization performance dominates the small additional cost. Assuming 10,000 transactions per second and an average of 5 KV updates per transaction, the tree depth after one year will be at most 41 (⌈log2(10000 × 5 × 3600 × 24 × 365)⌉); in practice the actual depth will be much shallower due to pruning of overwritten subtrees and garbage collection. In addition, ZK proofs can be used to compress the proof witness data, which drastically reduces proof verification cost, avoiding end-to-end bottlenecks in the proof size.
3.2 Indexer
The indexer maps application-level keys to their respective entries, enabling QMDB’s CRUD interface. To support efficient insertion and deletion of entries (Section 3.3), the indexer must support ordered key iteration. The indexer can be freely swapped for different implementations depending on specific application needs, but we expect that QMDB’s default in-memory indexer will meet the resource requirements of the majority of use cases. This modularity potentially enables optimizations that increase the performance or memory efficiency of the indexer, such as those found in systems like SILT [15] or MICA [16].
QMDB’s default indexer consumes approximately 15.4 bytes of DRAM per key and serves key lookups in memory to minimize SSD I/Os. This efficiency is achieved by using only the 9 most significant bytes of each key, which slightly increases the likelihood of key collisions but strategically trades worst-case performance for reduced DRAM usage. Of these 9 bytes, the first 2 bytes serve as the sharding key for the indexer, leaving a 7-byte memory footprint for key storage. The remaining 8.4 bytes consist of a 6-byte SSD position offset and additional data structure overhead, which is amortized across all keys. Using just 16 gigabytes of DRAM, the in-memory indexer can index more than 1 billion entries, making it suitable for a wide range of applications. We chose the B-tree map as the basis for the underlying structure of QMDB’s default indexer to take advantage of the B-tree’s high cache locality, low memory overhead, support for ordered key iteration, and graceful handling of key collisions. We use fine-grained reader-writer locks (determined by the first two bytes of the key hash) to minimize contention when updating entries.
3.3 CRUD interface
QMDB exposes a CRUD (Create, Read, Update, Delete) interface, and in this section we provide a high-level overview of how each operation is implemented. In all examples, we present the operation of the system when using the default in-memory indexer; other indexers may require more reads or writes to serve the same workload. For each operation, we present an intuitive explanation followed by a more formal description, along with a description of the SSD I/O required to synchronously handle the request. All writes in QMDB are buffered in twigs (DRAM) and persisted to SSD in batches, so each SSD write is amortized across 2048 entries; to precisely express the cost of each operation, we refer to an entry write as 1/2048 of a single batched flush to SSD.
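As a heavily simplified companion to the per-operation descriptions given next, the Update and Delete entry-derivation rules can be modeled in a few lines. This is a toy sketch, not QMDB’s implementation: entries are bare (Key, NextKey, OldId, OldNextKeyId) tuples, a Python list stands in for the append-only twig storage (a position doubles as an Id), a dict stands in for the indexer, and a set models ActiveBits; Id/Version/Value bookkeeping, SSD batching, and Create (which appends two entries) are omitted.

```python
log, index, active = [], {}, set()   # twig storage, indexer, ActiveBits

def append(entry):
    log.append(entry)
    pos = len(log) - 1               # position doubles as the entry's Id
    active.add(pos)
    index[entry[0]] = pos            # indexer: key -> position of current entry
    return pos

def update(key):
    """E' = (K, E.NextKey, E.Id, E.OldNextKeyId): 1 SSD read, 1 entry write."""
    pos = index[key]
    k, next_key, _, old_next = log[pos]
    active.discard(pos)              # clear the superseded entry's ActiveBit
    return append((k, next_key, pos, old_next))

def delete(key):
    """Ep' = (prevKey, EK.NextKey, Ep.Id, EK.OldNextKeyId): 2 reads, 1 write."""
    pos_k = index.pop(key)
    _, k_next, _, k_old_next = log[pos_k]
    prev_key = max(k for k in index if k < key)   # via ordered key iteration
    pos_p = index[prev_key]
    active.discard(pos_k)            # clear EK's ActiveBit
    active.discard(pos_p)            # Ep is superseded by Ep'
    return append((prev_key, k_next, pos_p, k_old_next))
```

Deleting a key hands its range back to the predecessor: the new entry for prevKey inherits the deleted entry’s NextKey, preserving the invariant that active entries tile the key space.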
For brevity, we omit the Id, Version, and Value fields when describing new entries (see Table 1), so an entry E is defined as: E = (Key, NextKey, OldId, OldNextKeyId).
Read begins by querying the indexer for the file offset of the entry corresponding to a given key; this file offset is used to read the entry in a single SSD IO.
Update first reads the most current entry for the updated key, then appends a new entry to the fresh twig. More formally, if E is the most current entry, the new entry E′ appended to the fresh twig derives its OldId and OldNextKeyId from E as follows: E′ = (K, E.NextKey, E.Id, E.OldNextKeyId). Updating a key in QMDB incurs 1 SSD read and 1 entry write.
Create intuitively involves appending one new entry and updating one existing entry; the existing entry whose Key and NextKey define a range that contains the created key must be updated with a new NextKey. This begins by first reading the entry Ep corresponding to the lexicographic predecessor (prevKey) of the created key K. Note that Ep must fulfill the condition Ep.Key < K < Ep.NextKey. Two entries are then appended to the fresh twig:
EK = (K, Ep.NextKey, Ep.Id, En.Id)
Ep′ = (prevKey, K, Ep.Id, En.OldId)
The ActiveBit of Ep is set to false (in memory), and the indexer is updated so that prevKey points to the file offset of Ep′ and K points to the file offset of EK. Creating a key in QMDB incurs 1 SSD read and 2 entry writes.
Delete is implemented by first setting the ActiveBit to false for the most current entry corresponding to K, then updating the entry for prevKey. First, the entries EK and Ep corresponding to the keys K and prevKey are read from SSD, and the ActiveBits for the twig containing EK is updated. Next, a new entry for prevKey is appended to the fresh twig: Ep′ = (prevKey, EK.NextKey, Ep.Id, EK.OldNextKeyId). Deleting a key in QMDB incurs 2 SSD reads and 1 entry write.
3.4 Proofs
The remainder of this section describes how each field of the QMDB entry enables the generation of various state proofs.
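One such mechanism, detailed in this section, is the backwards-in-time traversal of OldId links used for historical queries. A hedged sketch of that walk, with entries simplified to (old_id, version, value) triples keyed by Id (the real system reads each hop from SSD and additionally follows NextKey/OldNextKeyId to move across the key space):

```python
def value_at_height(entries, current_id, h):
    """Follow OldId links back in "time" until reaching the entry whose
    Version is at or below block height h; `entries` maps an Id to an
    (old_id, version, value) triple. Returns None if the key did not
    exist at height h."""
    eid = current_id
    while eid is not None:
        old_id, version, value = entries[eid]
        if version <= h:
            return value        # the key's state as of block h
        eid = old_id            # hop one step further into the past
    return None
```

For example, with an entry created at height 12 whose OldId chain reaches versions 9 and 5, a query at height 10 resolves to the version-9 value.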
For illustrative purposes, we present proofs of the state corresponding to a key K and the most current Merkle root R, and denote fields of an entry E as E.fieldName. All proofs are Merkle proofs and as a result can be statelessly verified.
Inclusion is proved by presenting the Merkle proof π for entry E such that E.Key = K; this entry E can be obtained after querying the corresponding file offset from the indexer.
Exclusion is proved by presenting the inclusion proof of E such that E.Key < K < E.NextKey; since NextKey is the lexicographic successor of Key, no active entry for K can exist.
Historical inclusion and exclusion at block height H can be proven for a key K by providing the inclusion proof of an entry such that K is represented by this entry at the given version (block height). QMDB uses OldId and OldNextKeyId to form a graph that enables the tracing of keys over time and space despite updates, deletions, and insertions. OldId links the current entry to the last inactive entry with the same key, and OldNextKeyId links to the entry previously referenced by NextKey (when the entry for NextKey is deleted). When proving historical inclusion or exclusion, QMDB traverses the OldId pointer to move backwards in “time”, and the NextKey and OldNextKeyId pointers to move to different parts of the key space at a given block height.
Reconstruction of historical state. The graph structure defined by OldId and OldNextKeyId can also be used to reconstruct the Merkle tree and the world state at any block height. The Version field tracks the block height and the transaction index where the entry was created, allowing the precise reconstruction of historical states at specific block heights.
3.5 Parallelization
Figure 2: QMDB prefetches data (prefetcher), performs the state transition (updater), then commits the updated state to the Merkle tree and persistent storage (committer).
State updates are parallelized in QMDB through sharding and pipelining.
QMDB shards its key space into contiguous spans using the most significant bits (for example, the first 4 bits can create 16 shards), with boundary nodes defining logical boundaries that prevent state modifications from crossing shard boundaries (i.e., PrevKey and NextKey always fall within the same shard). This sharding enables QMDB to better saturate underlying hardware resources and scale to bigger or multiple physical servers. In addition, QMDB implements a three-stage pipeline (Prefetch-Update-Flush) to allow the transaction processing layer to better saturate QMDB itself. For applications with relaxed synchronicity requirements for state updates, QMDB is able to interleave computation across overlapping blocks. This cross-block and intra-block parallelism allows QMDB to more fully saturate available CPU cycles and SSD IOPS. Clients interact with QMDB by enqueueing key-value CRUD requests; updates are requested by writing the old Entry and new Value into the EntryCache directly, while deletions and insertions require only the key and the new entry, respectively. The pipeline is illustrated in Figure 2 and is managed by three workers: the prefetcher, the updater, and the committer. Each stage is shown in rectangles with solid lines, and the workers communicate via producer-consumer task queues in shared memory. The prefetcher reads relevant entries from SSD into the EntryCache in DRAM when necessary (deletion and insertion), while the updater appends new entries and updates the indexer. Once the prefetcher and updater finish processing a block, the committer asynchronously Merkleizes the updates and flushes the full twigs to persistent storage. The QMDB pipeline has N+1 serializability, which guarantees that state updates are visible in the next block. This is implemented by enforcing that the prefetcher cannot run for block N until the updater finishes processing block N−1.
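The span-sharding rule above can be sketched in a couple of lines, assuming (as the text implies) that the shard is chosen from the most significant bits of the hashed key; with 4 bits, keys map to one of 16 contiguous spans, so a key and its lexicographic neighbors land in the same shard except exactly at span boundaries:

```python
SHARD_BITS = 4                                # 2^4 = 16 shards

def shard_of(key_hash: bytes) -> int:
    """Select a shard from the top SHARD_BITS bits of the hashed key,
    so each shard owns one contiguous span of the key space."""
    return key_hash[0] >> (8 - SHARD_BITS)

print(shard_of(bytes([0x00])))  # 0
print(shard_of(bytes([0xFF])))  # 15
```

Because QMDB hashes application keys before indexing, the spans receive a uniform share of the workload even if application keys are skewed.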
4 Evaluation
In this section, we present a preliminary evaluation of the performance of QMDB and compare it to RocksDB and NOMT. On a comparable workload and evaluation setup, QMDB achieves 6× higher updates per second than RocksDB and 8× higher updates per second than NOMT. When measuring the performance of QMDB, we generate 100,000 transactions per block (each creating 10 entries) and run the workload for 7000 blocks to create a total of 7 billion entries. Periodically (every billion entries) we test the throughput and latency of reads, updates, deletions, and creations, and after all 7 billion entries are populated we measure transactions per second (TPS) using transactions consisting of 9 writes, 15 reads, 1 create, and 1 delete.
4.1 6× more updates/s than key-value DBs
Figure 3 shows the throughput of QMDB compared to RocksDB (storing the application-level key-values with no Merkleization), demonstrating that QMDB delivers 6× more updates per second than RocksDB. This speedup is in fact an underestimate of QMDB’s advantage over RocksDB-based systems, given that all benchmarks compare QMDB with Merkleization to RocksDB without Merkleization. We believe the primary factor driving this speedup to be QMDB trading off functionality unnecessary for blockchain workloads for extra throughput. Examples of features and characteristics of RocksDB that are not required in blockchain workloads include efficient range/prefix queries and spatial locality of key-value pairs. We caveat that our RocksDB evaluation is preliminary and could be better optimized, as our results were gathered on an unsharded RocksDB instance with default parameters. We also tested RocksDB with the parameters recommended by the RocksDB wiki [9] with direct I/O enabled for reads and compaction, but did not observe noticeably better performance. We have also informally tested MDBX but do not show those results here, as MDBX was significantly slower than RocksDB.
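The workload parameters above are internally consistent, which a two-line check confirms (the 26-operation total is our own arithmetic on the stated transaction mix):

```python
# Population phase: 100,000 txns/block, 10 created entries each, 7000 blocks.
txns_per_block, entries_per_txn, blocks = 100_000, 10, 7_000
total_entries = txns_per_block * entries_per_txn * blocks
print(total_entries)            # 7,000,000,000 entries, as stated

# TPS phase: each measured transaction is 9 writes, 15 reads, 1 create, 1 delete.
ops_per_txn = 9 + 15 + 1 + 1
print(ops_per_txn)              # 26 operations per measured transaction
```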
Figure 3: QMDB shows a 6× increase in throughput over RocksDB. QMDB is able to do 601K updates/sec with 6 billion entries and demonstrates superior performance across all operation types. These results were obtained on an AWS c7gd.metal instance with 2 SSDs and 64 vCPUs.
4.2 Up to 8× throughput vs state-of-the-art
For a more apples-to-apples comparison with a verifiable database that also performs Merkleization, we compared QMDB to NOMT [10]. NOMT performs Merkleization and stores Merkleized state directly on SSD, and can be directly compared to QMDB in terms of functionality. Both QMDB and NOMT aim to be drop-in replacements for general-purpose key-value stores like RocksDB, and both aim to leverage the performance of NVMe SSDs. At the time of writing, both QMDB and NOMT are pre-release with significant optimizations in the pipeline for both systems, making a definitive comparison impossible at this point. We used the version of NOMT from November 2024. The steps we took to present a fair comparison include: evaluating QMDB and NOMT using their respective benchmark utilities, verifying the NOMT parameters with the authors [1], using the same hardware when evaluating each system, and normalizing the performance results against the workload. Unfortunately, we were unable to eliminate all variability, as NOMT does not support client-level pipelining and the evaluated version of QMDB did not support direct IO or io_uring (results for a newer version of QMDB with io_uring and direct IO are shown in Section 4.3). Table 3 shows the results of our evaluation, demonstrating an 8× speedup in normalized updates per second (transaction count multiplied by state updates per transaction). NOMT’s default workload is a 2-read-2-write transaction, whereas QMDB is evaluated with a 9-write-15-read-1-create-1-delete transaction (based on our own analysis of the operation composition of historical Ethereum transactions; data available upon request).
By normalizing the results based on the workload, we provide what we believe to be a fair representation of the comparative performance of these two systems. The read latency was comparable (30.7 μs for QMDB and 55.9 μs for NOMT) and close to the i3en.metal SSD read latency, which is in line with our expectations for both systems. We believe this performance gap to be primarily driven by SSD write amplification, given that NOMT buffers in-place updates in a write-ahead log whereas QMDB’s entries are immutable by design. This results in persistent storage writes for potentially every state update and Merkleization for NOMT, compared to QMDB, where an SSD write is only required every 2048 updates and zero SSD accesses are required for Merkleization. We note that QMDB’s performance relies on its indexer, which incurs some DRAM overhead. Compared to NOMT’s overhead of 1–2 bytes per entry, QMDB incurs an additional 14 bytes per entry with its in-memory indexer and an additional 1–2 bytes per entry with its hybrid indexer. We consider this to be a reasonable trade-off given the 8× increase in throughput, with QMDB’s hybrid indexer still offering a speedup for DRAM-constrained setups.
Table 3: QMDB is up to 8× faster than NOMT. Results are normalized by multiplying the transactions per second by the number of state updates per transaction.
4.3 Reaching 2M updates per second
We show preliminary results indicating that QMDB can double its throughput and reach 2 million updates per second by incorporating asynchronous I/O (io_uring) and direct I/O (O_DIRECT), improving CPU efficiency and eliminating VFS-related overhead, respectively. Continuous advancements in SSD performance have resulted in modern consumer-grade SSDs (e.g., Crucial T705, Samsung 980 [11]) being able to reach over 1 million IOPS with only one drive. These high-IOPS SSDs are not yet available on AWS, so we approximate their performance in our preliminary experiments by using RAID0.
After populating QMDB with 14 billion entries, we measured 2.28 million updates/second on i8g.metal-24xl (6 SSDs) and 697 thousand updates/second on i8g.8xlarge (2 SSDs), which are promising early results. 2.28 million updates per second is sufficient to support over one million native token transfers per second (each transfer requiring two state updates). QMDB’s CPU utilization averages 77% on the 32-core AWS i8g.8xlarge instance and 58% on the 96-core AWS i8g.metal-24xl instance, indicating that with faster SSDs the bottleneck is no longer SSD IO but rather CPU and synchronization overheads.

We also evaluated NOMT with a lower capacity of 1 billion entries on the same instances (i8g.metal-24xl and i8g.8xlarge), and observed a maximum of 60,831 updates/second. We acknowledge that comparing these numbers directly would not be fair, given that NOMT is focused on supporting single-drive deployments and RAID0 has different performance characteristics than a single SSD. We plan a more comprehensive evaluation with a single high-performance SSD once we are able to secure a testbed with the necessary hardware.

4.4 Scaling up and down

QMDB scales up to huge datasets and down to ultra-low minimum system requirements, enabling it to meet the needs of blockchains with the highest (performance-oriented) and lowest (most decentralized) node requirements.

Scaling up to hundreds of billions of entries. The hybrid indexer trades off SSD capacity and system throughput to reduce the DRAM footprint of the QMDB indexing layer to just 2.3 bytes per entry, allowing servers with a high ratio of SSD capacity to DRAM capacity to scale to huge world states. Table 4 shows the maximum theoretical number of entries that can be stored in QMDB running on various AWS instances. We calculate that the i3en.metal instance, with high SSD capacity and a reasonable amount of DRAM, could scale to 280 billion entries, far exceeding the needs of any existing production blockchain.
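The headline TPS claim follows directly from the measured updates-per-second figure; a minimal sanity check, using the two-updates-per-transfer assumption stated in the text:

```python
# Each native token transfer touches two state entries
# (debit the sender, credit the receiver).
UPDATES_PER_TRANSFER = 2

measured_updates_per_sec = 2_280_000  # i8g.metal-24xl with 6 SSDs
transfers_per_sec = measured_updates_per_sec // UPDATES_PER_TRANSFER
print(transfers_per_sec)  # 1140000 -> over one million transfers per second
```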
Due to the prohibitive amount of time necessary to populate hundreds or even tens of billions of keys, we only ran experiments up to 15 billion entries and conservatively extrapolate the results. The average DRAM overhead actually drops as more entries are inserted: 1 billion entries cost about 3 bytes of DRAM per entry, which drops to just 2.2 bytes per entry at 15 billion entries, indicating that the marginal DRAM overhead per additional entry is close to constant.

Table 4: QMDB can scale to hundreds of billions of entries. The hybrid indexer uses only 2–3 bytes of DRAM per entry. *This table shows extrapolated theoretical world state sizes for different hardware configurations, and compares the maximum entries stored using the in-memory indexer vs. the hybrid indexer.

Scaling down to consumer-grade budget servers. We built a low-cost Mini PC (parts totaling about US$540 as of November 2024) to test QMDB under resource-constrained conditions. The system featured an AMD R7-5825U (8C/16T) processor, 64 GiB DDR4 DRAM, and a TiPro7100 4 TB NVMe SSD rated at approximately 330K IOPS. Despite these modest specs, QMDB achieved tens of thousands of operations per second with billions of entries. Using the in-memory indexer configuration, we achieved 150,000 updates per second up to 1 billion entries, and stayed above 100,000 updates per second as we inserted up to 4 billion entries. With the hybrid indexer, QMDB maintained 63,000 updates per second while storing 15 billion entries. These results highlight QMDB’s ability to operate on commodity hardware, improving decentralization by lowering the capital requirements and infrastructural barriers to blockchain participation.

5 Discussion

Spatial locality is reduced in QMDB compared to general-purpose key-value stores such as RocksDB. It is true that QMDB does not preserve temporal locality, given that keys originally inserted at similar times can become separated in QMDB if they are later updated.
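The capacity extrapolation behind Table 4 reduces to simple arithmetic. A hedged sketch: the per-entry DRAM costs come from the text, but the DRAM budget below is a hypothetical example, not the exact instance figure the paper used, and in practice SSD capacity may be the binding constraint instead.

```python
def max_entries(dram_bytes: float, bytes_per_entry: float) -> int:
    """Upper bound on indexable entries when the indexer's DRAM
    footprint is the binding constraint (SSD capacity may bind first)."""
    return int(dram_bytes / bytes_per_entry)

DRAM = 768 * 2**30  # hypothetical 768 GiB DRAM budget, for illustration

# Hybrid indexer: ~2.3 bytes of DRAM per entry (from the text).
# In-memory indexer: ~14 additional bytes of DRAM per entry (from the text).
hybrid_max = max_entries(DRAM, 2.3)
in_memory_max = max_entries(DRAM, 14.0)
print(hybrid_max // 10**9, in_memory_max // 10**9)  # in billions of entries
```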
However, this is not a disadvantage for blockchain workloads, given that blockchain infrastructure must assume worst-case workload characteristics to avoid exposing the blockchain to denial-of-service attacks in a Byzantine fault model. This is unlike traditional computing workloads, which can rely on locality for average-case performance. In fact, most blockchains implement measures to uniformly distribute keys across storage, with some exceptions (e.g., arrays in EVM); this already reduces or eliminates spatial locality.

Provable historical state enables new applications, such as TWAP (Time-Weighted Average Price) aggregation at the tip of the blockchain with arbitrary time granularity.

Peer-to-peer syncing of state can be implemented easily and efficiently by sharing state at the twig granularity. A downloaded twig, accompanied by an inclusion proof of that twig against the global Merkle root, can be inserted into the state tree independently of other twigs.

Memory-efficient indexers are useful for heavily resource-constrained use cases or for decentralizing blockchains with tens of billions of keys. We implemented a memory-efficient, SSD-optimized hybrid indexer that uses only 2.3 bytes per key but requires one additional SSD read per lookup. The hybrid indexer stores key-to-file-offset mappings in immutable SSD-resident log-structured files and implements an overlay layer to manage entries on the SSD that have gone stale due to updates. In addition, the hybrid indexer uses a DRAM cache that exploits the spatial and temporal locality of the application workload.

State bloat is one of the many problems plaguing modern blockchains: as blockchains see growth in widespread adoption, world state grows continuously, to the point that it limits the ability of non-professional users to adequately run validator software.
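QMDB's concrete proof encoding is not given in the text; as a generic sketch of how a downloaded twig (or any leaf) could be checked against the global Merkle root before insertion, a standard inclusion-proof verification looks like this, with SHA-256 standing in for whatever hash the real tree uses:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, a stand-in for the tree's actual hash function."""
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Fold sibling hashes up to the root. Each proof step is
    (sibling_hash, side): 'left' if the sibling precedes the running
    hash at that level, 'right' if it follows."""
    acc = h(leaf)
    for sibling, side in proof:
        acc = h(sibling + acc) if side == "left" else h(acc + sibling)
    return acc == root

# Tiny two-leaf tree: root = H(H(a) || H(b)).
a, b = b"twig-a", b"twig-b"
root = h(h(a) + h(b))
assert verify_inclusion(a, [(h(b), "right")], root)
assert verify_inclusion(b, [(h(a), "left")], root)
```

Only a twig whose proof folds to the trusted global root would be accepted, which is what allows each twig to be synced and verified independently of the others.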
QMDB achieves a memory footprint that is an order of magnitude smaller than existing verifiable databases, and using the hybrid indexer can further reduce the memory footprint and lower barriers to validator participation.

Recovery after failures (crash, blockchain reorganization) is performed by replaying up to the last checkpoint and then trimming inactive entries. The reference QMDB implementation intentionally omits reorg-specific optimizations and leaves them to individual blockchains, given the variation in consensus protocols between chains. QMDB can be extended to support quick switches with an undo log, but in general QMDB expects blockchains to build a buffering layer on top of QMDB and only write finalized data (a similar approach to other verifiable databases).

Trusted Execution Environments (TEEs) offer several security advantages to blockchains, and to the best of our knowledge QMDB is the first TEE-ready verifiable database. Running a blockchain full node in a TEE (e.g., Intel SGX) protects the validator’s private key from leaking, provides a secure endorsement that the state root was generated by a particular binary, guarantees to peers that the validator is non-Byzantine, and prevents censorship. Current TEEs protect the integrity of CPU and DRAM but cannot fully isolate persistent storage resources; QMDB protects its persistently stored data via AES-GCM [7] encryption using keys dynamically derived from the virtual file offset, protecting against copy attacks.

Zero-knowledge (ZK) proof generation for state transitions is increasingly seen as a crucial part of future blockchains, with one barrier to adoption being long proof generation times. The generation of ZK proofs can be parallelized per state commitment [20] (e.g., each block can be proven individually and then chained together); thus, the degree of parallelization depends on the frequency of state root generation.
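The derive-a-key-per-offset idea can be sketched with a stdlib HMAC. This is a hypothetical scheme for illustration only: QMDB's actual KDF and parameters are not specified in the text, and the AES-GCM encryption itself would be done with a cryptographic library rather than shown here.

```python
import hashlib
import hmac

def derive_offset_key(master_key: bytes, virtual_offset: int) -> bytes:
    """Derive a distinct 256-bit key per virtual file offset, so a
    ciphertext block copied to a different offset will not decrypt
    under that offset's key (defeating copy attacks).
    Illustrative HMAC-based KDF, not QMDB's actual scheme."""
    return hmac.new(master_key, virtual_offset.to_bytes(8, "big"),
                    hashlib.sha256).digest()

master = b"\x00" * 32  # placeholder master key for illustration
k1 = derive_offset_key(master, 0)
k2 = derive_offset_key(master, 4096)
assert k1 != k2       # different offsets yield different keys
assert len(k1) == 32  # a valid AES-256-GCM key size
```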
QMDB’s high-performance in-memory Merkleization is capable of computing a new state root per transaction if desired, enabling the maximum degree of parallelism for ZK proof generation.

6 Conclusion

QMDB represents a significant leap in blockchain state databases, providing an order-of-magnitude improvement in throughput over state-of-the-art systems on datasets 10× larger than Ethereum at the time of writing. By organizing and compressing state updates into append-only twigs, QMDB is able to update and Merkleize world state with minimal write amplification, improving performance and reducing cost through efficient utilization of SSD IOPS. The immutability of full twigs allows state to be compressed by more than 99.9% for Merkleization, making QMDB the first live-state management system capable of performing fully in-memory Merkleization with zero disk IO on a consumer-grade machine.

We demonstrate that with these architectural innovations, QMDB can achieve up to 2 million updates per second and scale to world states of 15 billion keys. QMDB achieves lower minimum hardware requirements across all throughput benchmarks and world state sizes, democratizing blockchain networks by enabling affordable home-grade setups (US$540) to participate in large blockchains. At the same time, it provides substantial cost savings for large-scale operators through a flash-heavy design that eliminates the need for large amounts of expensive and power-hungry DRAM.

QMDB implements many features not present in other authenticated data structures (ADSes), such as historical state proofs, opening opportunities for a new class of applications not yet seen on the blockchain. These features, together with order-of-magnitude advancements in performance and efficiency, establish QMDB as a breakthrough in scalable and verifiable databases.

7 Acknowledgments

We gratefully acknowledge the invaluable feedback and assistance provided by the many individuals and teams who helped refine our system.
In particular, we thank Patrick O’Grady from Commonware for his expertise and guidance throughout the development of QMDB and the writing of this paper. We also extend our gratitude to Yilong Li and Lei Yang from MegaETH, and Ye Zhang from Scroll, for their insightful review of our design and manuscript. Finally, we thank Robert Habermeier and the Thrum team for their support in conducting the NOMT benchmarks.

References

[1] Reproducing benchmark numbers. https://github.com/thrumdev/nomt/issues/611.
[2] Amazon Web Services. Amazon Quantum Ledger Database (QLDB), 2019.
[3] Antonopoulos, P., Kaushik, R., Kodavalla, H., Rosales Aceves, S., Wong, R., Anderson, J., and Szymaszek, J. SQL Ledger: Cryptographically verifiable data in Azure SQL Database. In Proceedings of the 2021 International Conference on Management of Data (2021), pp. 2437–2449.
[4] Ayyalasomayajula, P., and Ramkumar, M. Optimization of Merkle tree structures: A focus on subtree implementation. In 2023 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC) (2023), IEEE, pp. 59–67.
[5] Dahlberg, R., Pulls, T., and Peeters, R. Efficient sparse Merkle trees: Caching strategies and secure (non-)membership proofs. In Secure IT Systems: 21st Nordic Conference, NordSec 2016 (2016), Springer, pp. 199–215.
[6] Deng, Y., Yan, M., and Tang, B. Accelerating Merkle Patricia trie with GPU. Proceedings of the VLDB Endowment 17, 8 (2024), 1856–1869.
[7] Dworkin, M. Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC. Special Publication 800-38D, NIST, 2007.
[8] El-Hindi, M., Ziegler, T., and Binnig, C. Towards Merkle trees for high-performance data systems. In Proceedings of the 1st Workshop on Verifiable Database Systems (2023), pp. 28–33.
[9] Facebook. Setup options and basic tuning. RocksDB wiki, https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning, 2024. Accessed 2024-12-21.
[10] Habermeier, R. Introducing NOMT. Blog post, May 2024.
[11] Habermeier, R. NOMT: Scaling blockchains with a high-throughput state database. Presented at sub0 reset 2024, November 2024.
[12] Jeon, K., Lee, J., Kim, B., and Kim, J. J. Hardware accelerated reusable Merkle tree generation for Bitcoin blockchain headers. IEEE Computer Architecture Letters (2023).
[13] Li, C., Beillahi, S. M., Yang, G., Wu, M., Xu, W., and Long, F. LVMT: An efficient authenticated storage for blockchain. ACM Transactions on Storage 20, 3 (2024), 1–34.
[14] Liang, J., Chen, W., Hong, Z., Zhu, H., Qiu, W., and Zheng, Z. MoltDB: Accelerating blockchain via ancient state segregation. IEEE Transactions on Parallel and Distributed Systems (2024).
[15] Lim, H., Fan, B., Andersen, D. G., and Kaminsky, M. SILT: A memory-efficient, high-performance key-value store. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles (SOSP ’11) (2011), pp. 1–13.
[16] Lim, H., Han, D., Andersen, D. G., and Kaminsky, M. MICA: A holistic approach to fast in-memory key-value storage. In Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation (NSDI ’14) (2014), pp. 429–444.
[17] Paradigm. Reth: A modular and high-performance Ethereum execution layer client. https://github.com/paradigmxyz/reth, 2022. Accessed 2024-11-25.
[18] Herodotus Protocol. Merkle mountain ranges: Historical block hash accumulator. https://docs.herodotus.dev/herodotus-docs/protocol-design/historical-block-hash-accumulator/merkle-mountain-ranges. Accessed 2024-11-18.
[19] Raju, P., Ponnapalli, S., Kaminsky, E., Oved, G., Keener, Z., Chidambaram, V., and Abraham, I. mLSM: Making authenticated storage faster in Ethereum. In 10th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 18) (2018).
[20] Roy, U. Introducing SP1: A performant, 100% open-source, contributor-friendly zkVM, 2024. Retrieved December 20, 2024.
[21] Sinha, R., and Christodorescu, M. VeritasDB: High throughput key-value store with integrity. Cryptology ePrint Archive (2018).
[22] Todd, P. Merkle mountain ranges. https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md, 2016. Accessed 2024-11-18.
[23] Wood, G. Ethereum: A secure decentralized generalized transaction ledger. Ethereum Yellow Paper (2014).
[24] Yang, X., Zhang, Y., Wang, S., Yu, B., Li, F., Li, Y., and Yan, W. LedgerDB: A centralized ledger database for universal audit and verification. Proceedings of the VLDB Endowment 13, 12 (2020), 3138–3151.
[25] Yue, C., Dinh, T. T. A., Xie, Z., Zhang, M., Chen, G., Ooi, B. C., and Xiao, X. GlassDB: An efficient verifiable ledger database system through transparency. arXiv preprint arXiv:2207.00944 (2022).
Injective, a lightning-fast L1 blockchain built specifically for the financial industry, and io.net, a decentralized distributed compute network, have announced a partnership to explore the integration of Injective’s iAgent AI agent framework with io.net’s decentralized GPU compute network.

Injective is a fast, interoperable layer-one blockchain, making it well suited to cutting-edge Web3 finance applications. To help developers create dApps, Injective offers robust plug-and-play modules. INJ is the native asset that powers Injective and its quickly expanding ecosystem. Injective, incubated by Binance, is backed by prominent investors including Jump Crypto, Pantera, and Mark Cuban.

Injective’s first AI-agent-based SDK, iAgent, was designed to combine blockchain operations with AI-powered features. Through AI-driven commands, iAgent enables users to send payments, execute on-chain transactions, and carry out other blockchain operations using large language models (LLMs) such as OpenAI’s. The SDK was launched to the public on November 19.

The partnership will use io.net’s distributed infrastructure, which consists of more than 10,000 cluster-ready GPUs and CPUs, to advance the intersection of artificial intelligence and decentralized computing. This integration aims to give AI practitioners the means to train, optimize, and deploy machine learning models using decentralized resources. As part of the partnership, the two organizations will explore how io.net’s decentralized GPU network and Injective’s iAgent framework can enhance computational capabilities. They will also look into how Injective might use io.net’s GPU pricing and data feeds in future on-chain financial products.

Tausif Ahmed, Chief Business Development Officer of io.net, stated: “This collaboration reflects our shared goal of creating practical solutions for developers and engineers.
By combining io.net’s decentralized compute infrastructure with Injective’s AI agent frameworks and tools, we aim to address key challenges in the AI and blockchain space by lowering barriers to entry for builders everywhere.”

Eric Chen, CEO and Co-Founder of Injective Labs, stated: “AI using blockchain rails has exploded in recent months and we’re thrilled to see increasing adoption of iAgent bringing AI on-chain. Now having io.net’s support with their decentralized compute platform to serve on-chain AI developer’s needs, can further expand the use cases and innovation in the burgeoning DeFAI sector.”

io.net is a decentralized physical infrastructure network (DePIN) that deploys and manages decentralized GPU clusters from geo-distributed sources. The IO Network, designed for low-latency, high-processing-demand use cases such as cloud gaming and AI/ML operations, has hundreds of thousands of GPUs available today. io.net lowers prices, speeds up lead times, and gives engineers and companies more options while democratizing access to GPU processing resources. One may become a capacity provider at io.net or obtain compute capacity for a fraction of the price.
io.net and Injective are joining forces to power the future of decentralized finance and artificial intelligence on the Injective network. Decentralized physical infrastructure network io.net (IO) will help bring this into reality with the expansion of its decentralized GPU compute network to Injective (INJ). The Injective team announced the development via a blog post on Jan. 14. According to the platform, the integration is live and io.net now supports DeFAI developers on the INJ network.

The DePIN market currently stands at around $32 billion, with io.net one of the leading projects in the sector by market capitalization. Top projects in the sector include Render, Filecoin, Theta Network, and The Graph. Per crypto.news data, the io.net market cap is $393 million as of Jan. 14, 2025. Meanwhile, the AI Agents and AI market caps are at $13 billion and $44 billion, respectively. Binance-incubated Injective, backed by leading venture capital firms such as Jump Crypto and Pantera Capital, has a market cap of $2.03 billion.

Injective is the blockchain for DeFi, real-world assets, and AI. By advancing decentralized computing and AI on the blockchain, io.net and Injective are opening up the Web3 space for builders, with available tools tapping into Injective’s iAgent framework and io.net’s decentralized GPU network.

In December 2024, Injective and Aethir announced a major collaboration that introduced tokenized GPU compute resource allocation. The initiative includes the conversion of GPU resources into tokens tradeable on Injective, meaning developers, researchers, and businesses can access computational resources at more flexible and cost-effective prices within the AI ecosystem.
According to official news, Alpha Network, a decentralized AI data execution layer incubated by KEKKAI Labs, announced a strategic partnership with the decentralized GPU cluster network io.net. The cooperation combines Alpha Network's privacy technology with io.net's decentralized GPU network, aiming to jointly promote data security, the democratization of AI infrastructure, and industry innovation. Both parties will explore creating a compliant, privacy-first AI training environment on io.net's decentralized GPU clusters. With the help of io.net's distributed GPU infrastructure, Alpha Network can securely process sensitive training data without relying on traditional trusted environments.
DePIN project io.net has officially partnered with Alpha Network to boost data security and access for AI-based Web3 applications. The partnership will potentially provide a secure and private environment for AI applications, helping developers build and deploy powerful decentralized applications (dApps).

io.net Commits to Next-Gen AI Developments

Alpha Network is the world’s first decentralized data execution layer for AI. It offers private data storage and AI training data for Web3 developers. The partnership will combine Alpha Network’s data privacy technology with io.net’s decentralized GPU infrastructure. According to a press release shared with BeInCrypto, the plan is to create a secure, high-performance environment for AI and Web3 applications. io.net will use its decentralized GPU clusters to support AI training in a privacy-preserving environment, unlike traditional centralized setups.

In simpler terms, Alpha Network will now be able to handle and process the sensitive data needed for AI training in a way that is safe and private. The network will no longer need to depend on traditional systems that require users to “trust” centralized providers such as big data centers or cloud services. Instead, by using io.net’s decentralized GPU network, Alpha Network can securely work with this data across a distributed network of computers. This decentralized approach eliminates the need for a “sandbox” (a controlled and isolated testing environment) while still ensuring the data remains private and protected. Moreover, Alpha Network’s Zero-Knowledge (ZK) technology guarantees that data is kept private and secure.

“Our collaboration with Alpha Network will significantly expand access to decentralized, privacy-compliant AI compute for Web3 builders,” Tausif Ahmed, Chief Business Development Officer at io.net, commented. The collaboration also aims to support the development of decentralized applications with high-quality datasets.
It offers a foundation for developers to build more scalable AI solutions. “This partnership will break new ground in secure AI and Web3 compute, allowing users to access cutting-edge AI infrastructure while ensuring privacy and security,” Lina Zhang, CEO of Alpha Network, added.

AI is the New Frontier for DePIN Projects

io.net said the partnership will also boost Alpha Network’s data sharding and model generation solutions, which optimize the training of AI models on large datasets. These improvements will allow developers and businesses to train AI models more efficiently.

Despite recent developments, the DePIN project’s IO token has seen notable volatility in the market. At press time, the token was trading at $3, down about 6% over the past 24 hours.

io.net price chart. Source: BeInCrypto

Nevertheless, io.net has made several deals over the last year to boost AI development. For instance, in December, io.net partnered with autonomous AI agent Zerebro, which will use io.net’s decentralized compute resources to enhance its Ethereum validator. Similarly, in September 2024, the DePIN project made an agreement with the TARS protocol to reduce AI model training costs by up to 30%. Overall, DePIN projects increasingly ventured into AI throughout 2024. Recent research showed that DePIN projects saw 100× growth in revenue last year, with the majority of this growth driven by AI initiatives.
The partnership will empower developers to create robust decentralized applications. The collaboration will create a privacy-preserving AI training environment using io.net’s decentralized GPU clusters.

io.net, a decentralized physical infrastructure network for GPU clusters, and Alpha Network have partnered to provide a secure environment for Web3 and AI apps. By addressing data security issues and democratizing access to AI infrastructure, the partnership will empower developers to create robust decentralized applications. Alpha Network, the first decentralized AI execution layer in the world, offers Web3 builders private data storage and AI training data. By combining Alpha Network’s data privacy and breach prevention technology with io.net’s decentralized GPU network, the partners hope to build a safe, compliant, and high-performance environment for AI and Web3 apps.

The collaboration will create a privacy-preserving AI training environment using io.net’s decentralized GPU clusters. As a result, Alpha Network will be able to handle critical training data safely, eliminating the need for conventional trusted environments. Rather, it can make use of io.net’s GPU infrastructure in a decentralized environment without a sandbox, and Alpha Network’s use of ZK technology ensures data security and secrecy.

Tausif Ahmed, Chief Business Development Officer at io.net, stated: “Our collaboration with Alpha Network will significantly expand access to decentralized, privacy-compliant AI compute for web3 builders. Combining Alpha Network’s cutting-edge data privacy solutions with io.net’s high-performance decentralized GPU capabilities will create an environment for web3 innovation to flourish.” Lina Zhang, CEO of Alpha Network, added: “Through partnering with io.net, we are expanding the boundaries of what can be achieved in the field of secure AI and web3 compute.
This will enable users to access state-of-the-art AI infrastructure without sacrificing privacy or security and support the creation of novel decentralized applications fueled by high quality datasets.” The goal of Alpha Network, the first decentralized data execution layer for AI in the world, is to streamline the execution of data applications while maintaining the greatest levels of security and compliance. AlphaOS, its flagship product, is an AI-powered cross-platform operating system that is Web3 native. AlphaOS provides a smooth and safe Web3 environment by enabling users to manage transactions, access Web3 insights, engage in data mining, and earn rewards through natural language instructions. Alpha Network’s data sharding and quantized model generation solutions will also be supported by the collaboration between io.net and Alpha Network, increasing the effectiveness of AI model training on large datasets for customers. In addition to protecting data privacy, this approach makes training for businesses, developers, and people safe, scalable, and affordable. The partnership represents a significant advancement in the creation of a safe and usable infrastructure for web3 and AI applications. As a consequence, a decentralized, privacy-first architecture will open up new possibilities for developers, companies, and GPU owners.
XRP price printed a bull flag on the daily chart, a technical chart pattern associated with strong upward momentum. Could this bullish setup and surging open interest signal the start of the second leg of XRP’s rally into the double-digit zone?

Increasing OI boosts XRP price

XRP (XRP) price is up 15% over the last seven days after weeks of consolidation following the altcoin’s rally toward $3.00 in early December. The XRP/USD pair is up 1.5% to its intraday high of $2.44 on Jan. 6, according to data from Cointelegraph Markets Pro and TradingView.

XRP’s potential to rise higher is backed by open interest (OI), which has increased significantly over the last 24 hours. The chart below shows that XRP OI has risen 45% over the last 24 hours, from $2.6 billion to $3.7 billion, suggesting that investors are opening positions in expectation of a price increase. It also indicates more trading activity and money entering the XRP market.

XRP open interest. Source: CoinGlass

Historically, significant jumps in OI have preceded dramatic rallies in XRP price. For example, the metric jumped by over 100% between July 13 and July 14, 2023, accompanying a 107% price jump over the same period. This price action came after Judge Analisa Torres ruled that the XRP token was not a security in the Securities and Exchange Commission’s lawsuit against Ripple. Similar price action was witnessed when OI jumped 76% between Nov. 29 and Dec. 3, 2024, accompanying another 100% jump in price over the same period. If history repeats itself, the latest surge in OI could see the XRP price break out of consolidation, recording massive gains toward $15.

XRP price “bull flag” targets $15

The XRP/USD pair is expected to resume its prevailing bullish momentum despite the pullback from recent highs, as the chart shows a classic technical pattern in the making. XRP’s price action between Nov. 5, 2024, and Jan.
6, 2025, has led to the formation of a bull flag pattern on the daily chart, as shown in the figure below. A daily candlestick close above the flag’s upper boundary at $2.41 would signal the start of a massive upward breakout. The target is set by the flagpole’s height, which projects to around $15, an approximately 520% uptick from the current price.

XRP/USD daily chart featuring bull flag pattern. Source: Cointelegraph/TradingView

Other bullish indicators on the chart include the immediate support provided by the 50-day simple moving average at $2.10 and the relative strength index resetting just above the 50 mark. Several analysts have also predicted a $15 XRP price target for 2025, citing market sentiment toward XRP’s adoption and partnership growth, fueled by a crypto-friendly Trump administration. Using Fibonacci levels and Elliott Wave theory, popular crypto analyst Egrag Crypto shared an optimistic prediction, saying that the XRP price can hit $15 by May 5, 2025.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.
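The measured-move arithmetic behind the article's bull-flag target can be expressed as a short calculation. This is a generic sketch of the technique, not the analyst's chart work: the $2.41 breakout, $2.44 spot price, and $15 target come from the article, while the flagpole low/high pair below is a hypothetical example chosen purely to illustrate the projection.

```python
def bull_flag_target(breakout: float, flagpole_low: float, flagpole_high: float) -> float:
    """Measured-move target: project the flagpole's height above the breakout level."""
    return breakout + (flagpole_high - flagpole_low)

def pct_change(start: float, end: float) -> float:
    """Percentage move from start to end."""
    return (end - start) / start * 100

# Hypothetical flagpole levels, for illustration only.
target = bull_flag_target(breakout=2.41, flagpole_low=1.0, flagpole_high=3.0)
move = pct_change(2.44, 15.0)  # move from the $2.44 spot price to the $15 target
print(round(target, 2), round(move))
```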
O.XYZ has raised $130 million to create the first-ever Decentralized AI Managed Organization (DeAIO). This initiative introduces a community-driven AI system designed to operate independently of corporate influence and remain resistant to shutdowns. The funding round was led by the founder of IO.NET, a decentralized computing network.

New Developments in Decentralized AI Innovation

According to an exclusive press release shared with BeInCrypto, DeAIO offers a new framework for AI governance, placing an “AI CEO” at the center of its operations. This ostensibly unbiased AI entity will manage decision-making, streamline development, and coordinate contributors from across the network. Community members and stakeholders have the ability to veto the AI CEO’s decisions, ensuring alignment with collective goals.

“In a future where Super AI exists it should belong to the people to empower them—not to corporations that want to control them. By building a decentralized AI system, we’re ensuring this transformative technology works for humanity, not shareholder profits,” Ahmad Shadid, the founder of IO.NET, told BeInCrypto.

The launch of O.XYZ comes as global debates around AI regulation and censorship intensify; such measures could hinder innovation and restrict public access to advanced technologies. O.XYZ’s decentralized AI infrastructure is built on terrestrial (ATLAS), orbital (ORBIT), and maritime (PACIFIC) nodes. This framework will potentially protect the system from control by any single authority or government.

AI Agents Among the Top Crypto Trends for 2025

The crypto industry saw significant AI advancements in 2024, with automated trading and asset management tools gaining popularity. Platforms like Coinbase and Replit provided developers with the resources to create bots, while tools like Near’s AI Assistant simplified decision-making for traders. AI infrastructure developments also progressed.
Decentralized autonomous chatbots (DACs), introduced by Dan Boneh and his team at a16z crypto, demonstrated the potential for greater AI autonomy. Looking ahead, OpenAI CEO Sam Altman predicted that AI agents could enter the workforce as early as 2025. AI agents will likely transform how businesses operate by automating tasks traditionally performed by humans. However, skepticism remains. A recent survey of Solana founders showed that many believe AI agents are overhyped. Nevertheless, AI developments are undoubtedly the next biggest trend in the crypto industry, and investors are reflecting that notion.
Phala and io.net Create Strategic Partnership to Enhance GPU-TEE

GPU TEE Partnerships 2024-09-26

We are thrilled to announce a new partnership between Phala Network and io.net, aimed at advancing secure computation and decentralized AI. Since launching its mainnet in 2021, Phala Network has built a robust infrastructure of over 30,000 TEE CPU nodes, enabling web3 developers to offload complex computations from smart contracts to Phala’s secure, off-chain network. These nodes play a pivotal role in maintaining data privacy and security while delivering verifiable proofs and oracles, supporting a wide range of web3 applications from social apps to AI-driven agents. With its recently announced benchmark for TEE-enabled GPUs, Phala continues to mark a new era in secure, high-performance decentralized AI. This benchmark evaluates the performance of Nvidia’s H100 and H200 GPUs when integrated with Phala’s TEE technology, offering the computational power required for training and running large AI models like LLaMA 3 and Microsoft Phi, while upholding the highest security and privacy standards. Read the latest GPU-TEE benchmark release on Cointelegraph. In partnership with io.net, Phala is taking its vision of decentralized AI to the next level by accessing GPU hardware via io.net’s cloud network, leveraging Nvidia’s H100 and H200 GPUs. By extending Trusted Execution Environments (TEEs) to include AI accelerators like GPUs, this collaboration ensures that sensitive AI workloads are securely processed, with advanced cryptographic protections. Nvidia’s H100 Tensor Core GPUs, equipped with confidential computing features such as encrypted memory and secure boot, further enhance this security layer. The IO Cloud enables users to deploy and manage decentralized GPU clusters on demand, offering access to powerful GPU resources without the need for costly hardware investments or complex infrastructure management.
By delivering a cloud-like experience, IO Cloud democratizes GPU access for ML engineers and developers, making advanced computing power more accessible and affordable by offering up to 90% savings compared to traditional cloud services. With seamless integration through the IO SDK, users can easily tap into globally distributed GPU resources, acting as a CDN for machine learning and bringing computation closer to end users. Built on the proven Ray framework used by OpenAI, it enables simple scaling of Python applications. Additionally, users can look forward to future access to advanced features like the IO Models Store, serverless inference, cloud gaming, and pixel streaming. This partnership with io.net represents a major advancement in Phala’s mission to democratize decentralized AI, delivering secure and transparent systems that make AI technology more accessible to a wider range of users. Together, Phala and io.net will conduct research, testing, and benchmarking, starting with cutting-edge NVIDIA H100s and H200s. This collaboration will explore deploying Phala Network’s autonomous AI agents and AI agent contracts on the IO Network, as well as integrating Phala Network’s TEE hardware and workers into the IO Network of GPUs. Both parties will explore deeper integrations between the IO Network and Phala Network as we continue to develop our technical roadmaps.

About io.net

io.net is a decentralized distributed compute network that enables ML engineers to deploy a GPU cluster of any scale within seconds at a fraction of the cost of centralized cloud providers. io.net sources compute resources from multiple locations and deploys them into a single cluster at massive scale. io.net has successfully supported training, fine-tuning, and inference for a wide range of ML models.

About Phala

Phala is a pioneer in confidential computing and secure data processing.
They specialize in leveraging Trusted Execution Environments (TEEs) and secure enclaves to enable secure and verifiable computations. Their expertise in confidential computing makes them a valuable partner in advancing the field of verifiable AI computing.
NVIDIA TEE GPU H200 Delivers High Performance for Decentralized AI

TEE Research GPU Verified Computation Confidential Computation 2024-10-31

In a recent study by Phala Network, io.net, and Engage Stack, the performance impact of enabling TEE on NVIDIA H100 and H200 Hopper GPUs was examined for large language model (LLM) inference tasks: https://arxiv.org/abs/2409.03992. TEEs add a security layer by isolating computations to protect sensitive data, which is essential for high-stakes applications. The findings highlight how TEE mode affects the H100 and H200 GPUs differently, revealing TEE's feasibility for secure, high-performance AI.

Key Findings

TEE-on mode has a greater impact on Time To First Token (TTFT) and Inter-Token Latency (ITL) on the H200 than on the H100.

1) Minimal Impact on Core GPU Computation

TEE mode introduces only a minor impact on the GPUs' core computations, with the main performance bottleneck stemming from data transfer between the CPU and GPU. The additional encryption over PCIe channels—needed to maintain secure data flow—slightly raises latency, but the impact is contained. Both H100 and H200 GPUs demonstrated that minimizing data movement significantly reduces TEE overhead, helping maintain overall system efficiency.

2) Low Overhead for Most LLM Queries

For typical LLM tasks, TEE incurs under 7% performance overhead on both GPUs. As sequence length increases, this overhead decreases even further, becoming nearly negligible for extended inputs and outputs. This shows that TEE’s security layer can handle large-scale LLM tasks efficiently on both the H100 and H200, offering dependable security with minimal performance impact across most queries.

3) Positive Results Across Different Models and GPUs

Larger LLMs, such as Llama-3.1-70B, showed almost no performance penalty from TEE on either GPU, while smaller models like Llama-3.1-8B experienced a slightly higher impact.
The H100 consistently outperformed the H200 in terms of overhead reduction, particularly with larger models. For example, Llama-3.1-70B incurred only a 0.13% overhead on the H100, while the H200 had a slightly higher 2.29% overhead. These results suggest that, while both GPUs are well-suited to high-demand applications, the H100 may be preferable for tasks requiring lower latency.

4) Minimal Real-Time Processing Impact

Real-time metrics such as Time to First Token (TTFT) and Inter-Token Latency (ITL) were used to evaluate latency. Both GPUs experienced minor latency increases in TEE mode, with the H200 displaying slightly higher overheads. However, as model size and sequence length grew, these latency effects diminished. The Llama-3.1-70B model, for instance, saw a TTFT overhead of -0.41% on the H100 (i.e., marginally faster with TEE enabled) and 3.75% on the H200. This indicates that TEE mode remains suitable for real-time applications, especially for larger, computation-intensive models where latency becomes less of a limiting factor.

5) High Throughput and Load Capacity for Secure AI Queries

Both TEE-enabled GPUs demonstrated substantial throughput and query load capacity, with the H100 achieving nearly 130 tokens per second (TPS) on medium-sized inputs and the H200 achieving comparably high TPS and QPS. Engage Stack’s cloud infrastructure was crucial in assessing these real-world, high-query-load scenarios, affirming the H100 and H200’s capabilities in handling secure AI processing without significant bottlenecks.

Model Comparison and Main Takeaways

Across different models and workloads, the study underscores that TEE’s impact on performance remains under 7% for typical LLM tasks. Larger models, particularly with longer sequences, saw the TEE-related overhead diminish to nearly zero. For instance, the largest model tested, Llama-3.1-70B, displayed negligible performance impact, reinforcing TEE’s applicability for large-scale, sensitive applications.
The study found that, despite the H200 experiencing slightly higher overhead than the H100, both GPUs maintained robust performance under TEE. This distinction between the H100 and H200 results suggests that the H100 may be better suited for highly latency-sensitive applications, while the H200 remains a strong choice for secure, high-performance computing where the emphasis is on query load and large model processing.

Practical Implications for AI Using TEE

The findings confirm that TEE-enabled NVIDIA Hopper GPUs are effective for organizations prioritizing data security alongside computational efficiency. TEE mode proves manageable even for real-time applications, particularly on the H100, which manages latency effectively. With Engage Stack’s essential support, this research affirms that TEE can protect sensitive data without sacrificing scalability or throughput, especially for applications in sectors like finance, healthcare, and decentralized AI.

Conclusion

As the need for secure data handling grows, TEE-enabled NVIDIA H100 and H200 Hopper GPUs provide both security and efficiency, especially for complex LLM workloads. While the H200 exhibits a slightly higher performance overhead, both GPUs demonstrate that TEE can be implemented effectively without compromising throughput, particularly as model size and token lengths increase. This research validates the use of TEE in real-world, high-performance AI applications across fields that demand both confidentiality and processing power, supporting the broader adoption of secure, decentralized AI. For more in-depth information, see the full benchmark research.
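The overhead percentages cited throughout the study are straightforward to interpret: overhead is the relative slowdown of a TEE-on run versus a TEE-off baseline for the same metric (TTFT, ITL, or throughput). A minimal sketch of that arithmetic, using hypothetical placeholder latencies rather than the study's raw measurements:

```python
# Compute TEE overhead as the relative slowdown of TEE-on vs. TEE-off runs.
# The latency figures below are hypothetical placeholders, not the study's data.

def tee_overhead_pct(tee_off: float, tee_on: float) -> float:
    """Relative overhead in percent; a negative value means the
    TEE-on run was actually faster than the TEE-off baseline."""
    return (tee_on - tee_off) / tee_off * 100.0

# Hypothetical TTFT measurements (milliseconds) for one model/GPU pair.
ttft_tee_off = 120.0
ttft_tee_on = 124.5

overhead = tee_overhead_pct(ttft_tee_off, ttft_tee_on)
print(f"TTFT overhead: {overhead:.2f}%")  # 3.75% with these placeholder numbers
```

On this definition, a negative figure such as the H100's -0.41% TTFT result simply means the TEE-on run came in marginally faster than the baseline, a difference within measurement noise.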
A threat actor has been using the promise of investments to trick users into handing over wallet permissions. The newly discovered scam uses elements of social engineering, pig butchering, and laundering funds through stablecoins. The attacker extracted about $1.2M from user wallets through social engineering tactics. The scam was noticed by Whitestream analysts. The funds have not been tracked in detail, but Whitestream notes most were directed to a single wallet before they were sent to exchanges.

Threat actor offers shady investments in confidence scams

The attacker’s method of stealing funds copies romance scams or pig butchering models, which rely on gaining the victim’s confidence. The end goal is to either request crypto directly or introduce a malicious link. While wallets can flag some sites, they are not yet filtering third parties. This allows anyone to build a wallet connection request and potentially drain funds. The scam led users to a site presented as an investment portal for Seed Crypto. The threat page is still active, displaying a basic message and a button to connect wallets. The landing page explained crypto in language targeting outsiders while promising a vague investment opportunity. The page required a wallet connection, then used the granted permission to drain wallets. The site required WalletConnect or Coinbase Wallet, one of the most widely used wallet apps. Early details revealed about the scam reinforce the regional nature of such attacks and their limited time frame. In this case, the threat actor operated out of Southeast Asia and focused on local services for cashing out. The exploiting address, however, had no problems swapping out funds through HTX, Binance, OKX, Gate.IO, and ChangeNow. Pig butchering and confidence scams are among the most closely watched, as they often target mainstream users rather than crypto insiders.
However, due to the ease of acquiring crypto or stablecoins, scammers are capable of convincing users to hand over or “invest” funds. Both Tether and Circle have assisted law enforcement with tracking and freezing pig butchering addresses before the scammers could cash out.

Personal message scams took up to $3.6B in 2024

Confidence scams targeting crypto outsiders surpassed losses from attacks against crypto protocols. It is difficult to track confidence scams, as some are regional and limited to a single campaign. However, an estimated $3.6B was lost and laundered through this type of scam, as revealed by data from a preliminary Cyvers overview of the past year. Over the course of 2024, the Huione Guarantee market was noted as a tool to launder funds through faked commercial activity. The main tools for moving funds were again USDT and USDC, which, despite attempts to freeze wallets, largely moved undetected. As this type of scam became more common, Interpol called for retiring the “pig butchering” term to avoid stigma and help victims seek help without shame. Some of the scams were considered romance-baiting, while others still had an element of confidence-building. Both eventually led to the same point: investment offers. Confidence scams caused deep losses this year because they typically target individuals with disposable funds. The US Securities and Exchange Commission (SEC) estimates total confidence scams at $5.6B for the whole of 2023. Crypto and stablecoins only accelerate the process and make the funds potentially untraceable.
Decentralized Physical Infrastructure Networks (DePin) are transforming the tech landscape by bringing decentralized projects to real-world infrastructure. Here’s what happened in the DePin sector recently: io.net partnered with Zerebro for AI development, MapMetrics released a roadmap for a future token listing, and Fluence gained two new early adopters for its cloudless virtual machine program.

io.net Partners with Zerebro

io.net, a DePIN GPU compute network, partnered with autonomous AI agent Zerebro last week, according to a press release shared with BeInCrypto. Zerebro will use io.net’s decentralized compute resources to enhance its Ethereum validator, which will help integrate AI and blockchain technology. “This collaboration… marks an exciting step forward for autonomous agents and decentralized AI in general. Zerebro can build with io.net’s permissionless and globally distributed compute network, ensuring it has the ability to continuously sustain operations and to keep innovating,” Tausif Ahmed, Chief Business Development Officer at io.net, told BeInCrypto. In recent months, io.net has engaged in several prominent DePin/AI partnerships. For example, it worked with TARS Protocol to reduce AI model training costs by 30% in September, and joined with Zero1 Labs to advance decentralized AI development in November. It conducted a similar collaboration with OpenLedger earlier this month.

MapMetrics Announces Path to Token Listing

MapMetrics, a “drive-to-earn” navigation app with GameFi token rewards, recently released a roadmap for a token listing. The news came in a recent blog post recounting some of the company’s successes over the past year. Especially prominent was its incorporation into peaq’s DePin “peaqosystem” this August, a pivotal building block for MapMetrics’ token launch. “While none of the projects building on peaq have launched their tokens yet, we are thrilled to announce that MapMetrics has been nominated as one of the very first projects to do so.
This recognition underscores the trust in our project and its potential. Their commitment… ensures that we are well-prepared for a strong post-launch trajectory,” the post claimed. Although MapMetrics listed its impending token launch as the most important development, several other accomplishments also received special attention. For example, in October, the firm released several major updates focused on improving the usability of its navigation features. By prioritizing this over GameFi functions, MapMetrics is planning for long-term growth.

Fluence Gains Two Early Adopters

Finally, cloudless internet provider Fluence announced that two firms, RapidNode and Supernoderz by Spheron, will join its DePin Cloudless VM (Virtual Machine) Alpha Testing Program. Fluence shared this via a press release with BeInCrypto, and the firm’s representatives seemed quite enthusiastic. “Around 90% of blockchain protocol nodes and validators are hosted in the centralized cloud. We believe that for Web3 to fulfill the promise of decentralization, it must require its underlying infrastructure to be decentralized. With the launch of VMs and our first partners, Fluence is finally making this possible,” claimed Tom Trowbridge, Co-Founder of Fluence. Fluence states that its cloudless virtual machines will eliminate the pain points of traditional cloud-based VMs. The company was sparse on details regarding the exact mechanisms of these upcoming products but claimed they can deploy workloads at up to 75% lower prices than traditional cloud providers.
Charles Hoskinson responds to Rick McCracken’s concerns about Cardano’s partnership challenges. Hoskinson clarifies disagreements with the Cardano Foundation over governance and accountability. Despite tensions, Hoskinson emphasizes the ecosystem’s growth and calls for unity in 2025. Charles Hoskinson, the CEO of Input Output (IO) and founder of Cardano, has taken to social media to address concerns within the Cardano ecosystem. Broadcasting from his office in Colorado, Hoskinson reflected on a tumultuous year and previewed his upcoming projects, including a Darkness Retreat and new ventures in South America. However, the focus of his message was a public response to comments made by Rick McCracken, a long-time friend and participant in the Cardano ecosystem. McCracken’s Concern: Can the Cardano Ecosystem Collaborate with External Partners? McCracken, who has been an active voice within the Cardano community, expressed concerns about the future of partnerships within the ecosystem. In a recent post, McCracken questioned the ability of the Cardano Foundation (CF) and IO to build lasting professional relationships, citing ongoing disagreements between the two organizations. Specifically, he noted that if internal collaboration between key ecosystem players like Hoskinson, Tam, and Fred could not be achieved, it would be challenging to expect successful partnerships with organizations outside the ecosystem. Hoskinson Clarifies the Cardano Foundation Disagreement: A Matter of Governance In his video response, Hoskinson clarified the long-standing disagreements between IO and the Cardano Foundation. He highlighted a fundamental philosophical divide regarding the governance structure of the ecosystem. Hoskinson believes the Cardano Foundation should be a community-oriented organization with leadership elected by the community. Meanwhile, the Cardano Foundation has consistently maintained that it will never have community-elected leadership. 
This stance is one that Hoskinson and many within IO believe undermines the decentralized vision for Cardano. For Hoskinson, the issue isn’t simply internal politics. It’s about the accountability of the Cardano Foundation to the community. He pointed out that the $600 million in funds controlled by the Cardano Foundation represents the community’s money. As such, there should be mechanisms for oversight and accountability. Without community input, Hoskinson argued, the ecosystem risks making decisions that are not aligned with its foundational principles. Progress Within the Ecosystem and Future Partnerships: A Look at Cardano’s Growth While acknowledging the ongoing conflict with the Cardano Foundation, Hoskinson stressed that the broader Cardano ecosystem continues to thrive. He pointed to numerous developments within the ecosystem. They include the growth of Cardano’s DeFi sector, the expansion of its meme coin ecosystem, and the continuous improvement of its developer experience (DevX) through tools like Plutus and Plutus V4. Hoskinson also highlighted the increasing number of opportunities for Cardano in the wider blockchain space. He cited successful engagements with major players like Microsoft Azure, Flare, and Hashgraph. Despite philosophical disagreements, he noted that the technology and community behind Cardano remain strong. There is growing participation and increasing visibility at major events like Token 2049, Consensus 2025, and Bitcoin 2025. A Call for Growth and Unity in 2025: Focusing on the Bigger Picture Despite the frustration voiced in his response, Hoskinson called for the Cardano community to focus on the bigger picture. He urged members to rise above petty squabbles and participate in the on-chain governance system that empowers the community to shape the future of Cardano. 
He expressed confidence that 2025 could be a pivotal year for the ecosystem if the community and its leaders could put aside differences and focus on building and innovating together. “2025 can be a great year if we want it to be. We need to focus on the bigger picture—the on-chain governance system,” Hoskinson stated. Disclaimer: The information presented in this article is for informational and educational purposes only. The article does not constitute financial advice or advice of any kind. Coin Edition is not responsible for any losses incurred as a result of the utilization of content, products, or services mentioned. Readers are advised to exercise caution before taking any action related to the company.
Lahore, Pakistan, December 20th, 2024, Chainwire

O.XYZ, the leading decentralized Super AI project, announces the launch of OSOL100, a first-of-its-kind AI index token designed to capture the cumulative value of Solana’s top 100 AI projects. This innovative token provides users with direct exposure to Solana’s AI infrastructure, agents, and meme tokens, all through one easily managed and fully transparent investment tool. OSOL100 simplifies investment strategies while enhancing portfolio diversification. It tracks and represents the performance of the top 100 AI-focused projects within Solana’s thriving ecosystem, offering accessibility to the most promising developments. Each OSOL100 token functions as a decentralized share of the fund, hosted on DAOS.fun, providing proportional exposure to its assets. Launched by O.XYZ, OSOL100 aligns with the company’s mission to create the world’s first Sovereign Super AI — an AI owned and governed by the community to benefit humanity. Powered by SuperMissO, the first AI CEO in development, OSOL100 embodies O.XYZ’s vision of an autonomous, community-led future. OBOT token holders gain exclusive access to OSOL100, enhancing the value and utility of their existing holdings.

About O.XYZ

O.XYZ aims to reshape artificial intelligence by developing systems independent of corporate control. It focuses on making AI technology accessible, transparent, and community-driven, ensuring superintelligence serves humanity’s interests. O.XYZ’s technical foundation centers on building an AI ecosystem designed to be shutdown-resistant and self-led. Its key initiatives include developing ‘Sovereign Superintelligence,’ creating decentralized infrastructure, and researching hyper-fast AI systems. The project operates under the O.Systems Foundation, led by Ahmad Shadid. Shadid, who previously founded IO.NET, a $3B Solana DePIN, brings his experience to O.XYZ’s work on building an autonomous, community-led AI ecosystem.
Contact
VP Biz Dev
Hassan Tariq
O.XYZ
[email protected]
Cardano’s research agenda prioritizes scaling Ouroboros for higher transaction throughput and efficient processing across a growing blockchain network. The Tokenomicon initiative explores flexible economic models, leveraging native assets and Babel fees for enhanced blockchain financial frameworks. Global Identity integration within Cardano aims to enhance transaction interoperability, governance functionality, and smart contract compatibility. Cardano, known for its scientific approach and peer-reviewed methodology, has announced its Strategic Research Agenda to guide blockchain advancements over the next decade. According to Input Output (IO), the research entity behind Cardano, the agenda highlights nine key thematic areas aimed at addressing critical challenges and opportunities in blockchain technology. To deliver on the full promise of blockchain, Input | Output Research is advancing a Strategic Research Agenda through 9 thematic focus areas. From scaling the Ouroboros protocol stack, to building a next-level identity and credential layer, and enabling seamless interchain… pic.twitter.com/RVzFEmOels — Input Output (@InputOutputHK) December 18, 2024 The announcement, shared on the official Input Output Research handle, sets a forward-looking vision for the Cardano ecosystem. The agenda begins with “The World’s Operating System,” an initiative to enhance Cardano’s infrastructure for efficient and secure smart contract development. The goal is to enable a robust framework that supports a broad range of decentralized applications (dApps) and services. Complementing this, the Ouroboros protocol stack will undergo scaling improvements to handle the increasing transaction volume as the Cardano network expands. Another focus area, Tokenomicon, targets the economic mechanisms within the blockchain. Cardano aims to optimize its tokenomics by researching the financial models that govern blockchain ecosystems. 
With features such as native user-defined assets and the Babel fee system, Cardano is positioning itself to explore flexible payment options and strengthen its economic framework. The agenda also prioritizes Global Identity, embedding identity solutions into Cardano’s core functionalities. This integration enhances the compatibility of transactions, governance, and smart contracts, making them interoperable across the broader ecosystem. Democracy 4.0, another key initiative, seeks to secure voting mechanisms and incentivize participation in governance. Cardano’s scalability is being addressed through Hydra, a protocol designed to optimize transaction throughput while reducing costs and latency. Interchains will expand Cardano’s cross-chain capabilities, allowing developers to build multi-chain dApps within a secure environment. Finally, the agenda focuses on advanced cryptographic solutions, including zero-knowledge proofs (ZK) and research into the post-quantum era, ensuring long-term security for blockchain applications. ETHNews reports that this comprehensive roadmap reflects Cardano’s commitment to driving blockchain adoption through scalable, secure, and interoperable solutions. The outlined themes provide a foundation for future development, solidifying Cardano’s role as a leader in blockchain innovation while addressing the practical needs of developers and enterprises. The current price of Cardano (ADA) is approximately $0.977, showing a daily increase of 0.83%. Over the past month, ADA has gained 33.61%, contributing to a strong 64.65% year-to-date growth. However, it remains below its all-time high of $3.16, reflecting a gradual recovery within the broader cryptocurrency market. Cardano’s market capitalization is approximately $34.35 billion, with a trading volume of $2.26 billion in the last 24 hours.
Active network developments, including the recent Strategic Research Agenda targeting advancements in its Ouroboros protocol and tokenomics, support ADA’s long-term prospects. Key resistance lies near $1.03, while support around $0.95 could stabilize any retracements. Recent whale accumulation and positive sentiment toward Cardano’s ecosystem add to its bullish outlook in the coming months.
Hong Kong’s Securities and Futures Commission has licensed four new cryptocurrency exchanges: Accumulus GBA Technology, DFX Labs, Hong Kong Digital Asset EX, and Thousand Whales Technology. According to local reports, the approvals were announced today, raising the total number of licensed virtual asset trading platforms in Hong Kong to seven. They were issued under Hong Kong’s wider plan to strengthen its rules on virtual assets and, in the process, boost the competitiveness of the city as a global digital asset center. The SFC said its “swift licensing process” helped speed up the approvals while ensuring they met the rules. “We aim to strike a balance between safeguarding the interests of investors and facilitating continuous development for the virtual asset ecosystem in Hong Kong,” said Eric Yip, SFC executive director of intermediaries. The approvals come at a time when Bitcoin’s price is booming, having increased by over 60% in the last six months and recently crossing $100,000 for the first time. The SFC requires the newly licensed platforms to complete additional steps, including vulnerability assessments and testing by independent parties, before they can fully operate. Thousand Whales Technology is the operator of the EX.IO trading platform. The company is backed by Valuable Capital Group, a brokerage owned by Sina Corporation, which operates China’s popular social media site Weibo. The four exchanges were among almost 30 firms to have applied for VATP licenses in 2024, though some platforms, including OKX and HTX, have since dropped their applications due to regulatory issues. Meanwhile, Hong Kong has previously licensed three platforms: HashKey, OSL, and HKVAX. Earlier this year, the city launched Asia’s first exchange-traded funds (ETFs) for spot Bitcoin and Ether, beating the United States to the milestone.
However, Hong Kong has struggled to properly regulate over-the-counter (OTC) crypto trading and is now revising its oversight approach in response to industry feedback.
Money talks. In cryptocurrency, it screams through megaphones and flies banners across stadium skies. The recent revelation of Polkadot’s $37 million marketing spend has reignited a familiar debate within the blockchain community. Their aggressive growth strategy, complete with influencer campaigns and sports sponsorships, mirrors a pattern seen throughout the industry’s evolution. Cryptocurrency projects have long walked a tightrope between building awareness and maintaining credibility. Some call it growth hacking. Others label it desperation. The truth lies somewhere in between, hidden in the spreadsheets of marketing budgets and community engagement metrics. For an industry built on transparency, the methods behind crypto marketing often remain surprisingly opaque. Yet Polkadot’s recent treasury report has inadvertently pulled back the curtain, offering a rare glimpse into the real costs of chasing growth in Web3. So that begs the question… Is Crypto All About the Hype? The crypto industry thrives on promises. Projects launch daily, each claiming revolutionary technology and groundbreaking solutions. Marketing teams craft elaborate narratives about mass adoption and industry disruption. Development roadmaps stretch years into the future while promotion budgets drain treasuries today. Behind every blockchain project stands an army of social media managers, content creators, and community moderators. They craft narratives, manage expectations, and drive engagement. Marketing budgets often dwarf technical spending. Growth metrics become more important than GitHub commits. The industry measures success through Twitter followers rather than transaction volumes. Yet this focus on hype serves a purpose. Early adoption requires awareness and communities require nurturing in order to build a foundation. In an industry built on network effects, attention drives value. That’s why smart projects leverage this dynamic, using strategic marketing to build genuine communities. 
Conversely, others simply throw money at short-term solutions, hoping quantity will translate into quality. The difference lies in execution. Successful projects blend marketing prowess with technological substance. They understand hype’s role in driving adoption while maintaining focus on development. Their marketing spend reflects strategic thinking rather than desperate attempts at relevance. The best teams recognize that sustainable growth requires more than just flashy campaigns and influencer endorsements. However, recent events have pulled back the curtain on crypto’s marketing machinery, exposing the true cost of chasing growth at any price. When Marketing Millions Miss Their Mark Polkadot’s treasury report landed like a bombshell in June. The blockchain project spent $37 million on marketing in early 2024, nearly double its development budget. Community members watched in disbelief as the numbers painted a stark picture of modern crypto marketing — one where promotion overshadows product development and short-term visibility trumps long-term value creation. The granular details of Polkadot’s spending revealed deeper systemic issues within crypto marketing practices. Their influencer campaigns targeting North America and Europe consumed substantial portions of the budget, with each month-long promotion costing roughly $300,000. Initial metrics appeared promising, boasting millions of content views and hundreds of thousands of engagements. Yet beneath these surface-level statistics lurked troubling patterns of artificial inflation and questionable value. Investigation into these marketing initiatives uncovered a complex web of suspicious activities. YouTube channels materialized overnight with implausible subscriber counts, while Twitter profiles coordinated identical content streams across networks of bot-driven accounts. 
Key opinion leaders selected for premium partnerships often displayed signs of manufactured engagement, their follower counts inflated and their content engagement metrics artificially enhanced through coordinated automation.

Polkadot’s broader spending choices raised fundamental questions about value creation in the blockchain space. Their treasury allocated $450,000 for event expenses while community-driven initiatives struggled for basic funding. Premium partnerships consumed resources at an alarming rate, including $480,000 for a two-year logo display on CoinMarketCap and $180,000 for private jet branding. These decisions occurred against a backdrop of stagnant token prices and slowing ecosystem development.

The project’s marketing strategy exemplifies a growing disconnect between spending and substance in crypto promotion. While traditional marketing metrics showed surface-level success, deeper analysis revealed concerning patterns of inefficiency and waste. The treasury, currently projected to last another two years at current spending rates, faces mounting pressure from community members questioning the return on these substantial investments. The situation highlights a critical challenge facing blockchain projects: distinguishing between meaningful growth initiatives and expensive exercises in vanity metrics.

The Missing Link Between PR and Growth

Public relations in cryptocurrency often plays second fiddle to aggressive growth tactics. Marketing teams chase viral moments and influencer endorsements while overlooking the fundamentals of strategic communication. This approach stems from the industry’s obsession with immediate results, yet misses crucial opportunities for sustainable growth.

Traditional PR brings subtle but significant advantages to blockchain projects. While sponsored posts generate quick spikes in attention, carefully crafted media relationships build lasting credibility. Industry publications value authenticity over paid placement.
Journalists seek genuine innovation rather than promotional noise. These relationships become invaluable during critical moments, from product launches to crisis management.

Most crypto projects struggle to balance immediate visibility with long-term reputation building. Marketing budgets flow freely toward quantifiable metrics like social media engagement and website traffic. Meanwhile, PR initiatives that could strengthen market position and industry standing receive minimal attention. This imbalance creates vulnerability, leaving projects ill-equipped to handle scrutiny or navigate market downturns.

Successful blockchain projects understand the symbiotic relationship between growth hacking and public relations. They recognize that while aggressive marketing drives initial interest, strategic PR sustains momentum through market cycles. Their communication strategies blend traditional media outreach with innovative community engagement. Press releases complement Twitter Spaces. Media tours enhance Discord announcements.

The distinction shows clearly during market turbulence. Projects built on pure hype crumble under pressure, their communities scattering at the first sign of trouble. Those with strong PR foundations weather storms more effectively, maintaining stakeholder confidence through clear communication and established media channels. Their prior investment in relationship building pays dividends when market sentiment shifts.

Smart teams recognize that effective PR extends beyond press releases and media mentions. It encompasses community management, developer relations, and stakeholder communication. This comprehensive approach creates resilience, enabling projects to maintain momentum even when marketing budgets tighten or market conditions deteriorate.

Growth Hack the Right Way

The cryptocurrency industry stands at a crossroads between hype-driven marketing and sustainable growth strategies.
Projects rushing toward quick wins through influencer campaigns and paid promotions often find themselves building on shifting sands. Real growth demands more than viral moments and sponsored content. It requires strategic communication, genuine community building, and balanced resource allocation.

Smart projects recognize this evolution in crypto marketing. They understand that tomorrow’s leaders will master the delicate balance between innovative growth tactics and time-tested PR fundamentals. Sustainable success in blockchain requires more than just spending power: it demands strategic vision, authentic communication, and unwavering commitment to genuine value creation.

About the Author

Jamie Kingsley is a prominent figure in the crypto PR industry, serving as COO and Co-Founder of The PR Genius (PRG). He has played a crucial role in transforming PRG from a small, niche firm into a multi-service growth marketing agency. Kingsley’s strategic leadership facilitated a successful pivot from lead generation to public relations, enabling the agency to work with high-profile clients such as IO.net, Yellowheart, Radix, Movement Labs, and RTFK Studios.

In addition to his role at PRG, Kingsley is a Board Member of the Asia Web3 Alliance Japan, where he contributes to the advancement of decentralized internet initiatives in a rapidly growing blockchain market. His expertise in media strategy and growth hacking has positioned him as a key influencer in the crypto space, recognized for his adaptability and resilience in navigating the industry’s challenges.
Last updated: December 18, 2024 11:16 EST

The Hong Kong Securities and Futures Commission (SFC) has officially approved four new virtual asset trading platform (VATP) providers, significantly expanding the region’s regulatory framework for virtual assets. The newly approved entities are Hong Kong Digital Asset EX Limited (HKbitEX), Accumulus GBA Technology (Hong Kong) Co., Limited (Accumulus), DFX Labs Company Limited, and Thousand Whales Technology (BVI) Limited (EX.IO); they join three previously licensed platforms, bringing the total number of authorized providers to seven.

Source: SFC.hk

Hong Kong Virtual Asset Providers: Has Licensing Gotten Easier?

Adding these four VATPs aligns with the Hong Kong regulator’s objectives of enhancing investor protection and maintaining market integrity through transparent regulation. The licensed platforms are required to adhere to stringent compliance measures, including anti-money laundering protocols, robust cybersecurity systems, and transparency in operations.

Among the newly approved platforms, HKbitEX and Accumulus have already garnered attention within the Hong Kong community for their innovative approaches to digital asset trading. HKbitEX offers advanced over-the-counter (OTC) trading solutions to bridge the gap between institutional and retail investors. Accumulus, on the other hand, offers crypto trading but emphasizes seamless integration with Hong Kong’s traditional financial systems. These platforms, along with DFX Labs and Thousand Whales, are expected to operate in full alignment with the regulatory rules.

The SFC’s licensing process is meticulous. It thoroughly evaluates each applicant’s business model, governance structure, and compliance capabilities.

Growing Virtual Asset Ecosystem in Hong Kong

The expansion of licensed VATPs in Hong Kong is a pivotal moment for the global virtual asset ecosystem, and particularly for the region itself.
It signals a shift toward greater regulatory acceptance and integration of digital assets into mainstream financial markets. The increased number of licensed platforms provides more choice and greater security assurance for investors. Licensed VATPs are held to high standards of operation, reducing the risks associated with unregulated platforms.

Despite these advancements, challenges remain. While licensing VATPs is a step in the right direction, sustained efforts are needed to continually educate investors about the benefits and risks of crypto trading.

According to a report on December 17, Hong Kong will adopt the OECD’s Crypto-Asset Reporting Framework (CARF) to enhance tax transparency and combat cross-border tax evasion.

“🇭🇰 Hong Kong is set to implement the Crypto Asset Reporting Framework by 2026, enhancing tax transparency and tackling cross-border tax evasion in the crypto space! #Crypto #Tax https://t.co/MU2Cg6ac0D” — Cryptonews.com (@cryptonews) December 17, 2024

The CARF was introduced in June 2023. It extends the Common Reporting Standard (CRS) to crypto assets and mandates annual exchanges of account and transaction information between jurisdictions. Hong Kong plans to complete legislative amendments by 2026, with the first automatic data exchanges scheduled for 2028. This initiative builds on Hong Kong’s history of financial data exchange under the CRS since 2018 and aims to address the complexities of the rapidly evolving crypto market.

Hong Kong is also accelerating efforts to establish itself as a global crypto hub by introducing a fast-track licensing process for trading platforms. Joseph Chan, Acting Secretary for Financial Services and the Treasury, announced that the SFC plans to operationalize a consultative panel early next year to support licensed platforms. Since the crypto licensing regime began in June 2023, firms such as OSL Exchange and HashKey Exchange have gained approval to serve retail investors.
Alongside licensing, Hong Kong is advancing legislation to regulate stablecoin issuers. In line with global trends, the Hong Kong Monetary Authority (HKMA) will license issuers of fiat-referenced stablecoins.
io.net, the leading provider of decentralized GPU computing solutions, has been accepted into the Dell Technologies Partner Program as a Dell Technologies Authorized Partner and Cloud Service Provider, a significant accomplishment for the firm. The move combines io.net’s GPU network with Dell’s world-class infrastructure, enabling io.net to deliver scalable and cost-effective solutions for artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) workloads.

Through the Dell Partner Program, io.net gains access to Dell Technologies’ resources, expertise, and go-to-market capabilities. This will support businesses seeking sophisticated solutions to complex computing problems, bridging the gap between decentralized GPU power and Dell’s trusted hardware infrastructure.

Tausif Ahmed, VP of Business Development at io.net, commented: “Joining the Dell Technologies Partner Program is an important step for io.net. It supports our goal of delivering solutions that integrate our decentralized GPU platform with Dell’s reliable infrastructure, helping businesses address their computing challenges more efficiently and cost-effectively. Together, we look forward to delivering practical, enterprise-grade solutions tailored for the next generation of AI innovation.”

As part of the program, io.net will work with Dell Technologies on go-to-market activities, demand generation, and co-marketing initiatives. This makes it possible for business clients to deploy solutions that seamlessly blend distributed GPU power with stable, resilient hardware from Dell Technologies.
By drawing on the extensive ecosystem Dell has built, io.net is in a strong position to help make decentralized compute solutions more accessible across a variety of industries. The proliferation of AI and ML applications has increased the need for compute solutions that are both scalable and affordable. Traditional centralized cloud providers often fail to satisfy the requirements of contemporary businesses because they are constrained by high prices, limited flexibility, and resource bottlenecks. io.net’s decentralized GPU network is designed to overcome these difficulties by sourcing processing capacity from a worldwide network of distributed GPUs and clustering them into a single, high-performance infrastructure.

Following io.net’s entry into the Dell Technologies Partner Program, customers will be able to take advantage of on-demand GPU clusters that scale to enterprise requirements, with considerable cost savings compared with centralized providers. Meanwhile, seamless integration with Dell’s advanced hardware will support workloads that are both dependable and high performance.

The cooperation between io.net and Dell Technologies represents a significant step forward in democratizing access to decentralized computing, especially for enterprises working on AI training, inference, and high-performance compute use cases. By leveraging Dell’s worldwide presence and enterprise trust, io.net is well positioned to accelerate the adoption of decentralized compute solutions while meeting the performance standards organizations expect.
December 19, 2024 – Dubai, United Arab Emirates

io.net, the leading provider of decentralized GPU computing solutions, has been accepted to join the Dell Technologies Partner Program as a Dell Technologies authorized partner and cloud service provider. The move will combine io.net’s GPU network with Dell’s world-class infrastructure, delivering scalable and cost-effective solutions for AI, ML (machine learning) and HPC (high-performance computing) workloads.

By joining Dell’s Partner Program, io.net gains access to Dell Technologies’ resources, expertise and go-to-market capabilities. This will support enterprises seeking advanced solutions to handle complex computing challenges, bridging decentralized GPU power with Dell’s trusted hardware infrastructure.

Tausif Ahmed, vice president of business development at io.net, said: “Joining the Dell Technologies Partner Program is an important step for io.net. It supports our goal of delivering solutions that integrate our decentralized GPU platform with Dell’s reliable infrastructure, helping businesses address their computing challenges more efficiently and cost-effectively. Together, we look forward to delivering practical, enterprise-grade solutions tailored for the next generation of AI innovation.”

As part of the Dell Technologies Partner Program, io.net will collaborate on go-to-market efforts, demand generation and co-marketing initiatives. This enables enterprise customers to deploy solutions that seamlessly integrate decentralized GPU power with robust, dependable hardware from Dell Technologies. By tapping into Dell’s extensive ecosystem, io.net is well-positioned to make decentralized compute solutions more accessible across multiple industries.

The rise of AI and ML applications has amplified demand for scalable and affordable compute solutions.
Traditional centralized cloud providers often fall short in meeting the needs of modern enterprises, constrained by high costs, limited flexibility and resource bottlenecks. io.net’s decentralized GPU network addresses these challenges by sourcing computational power from a global network of distributed GPUs and clustering them into a unified, high-performance infrastructure.

Following io.net’s admission to the Dell Technologies Partner Program, clients will benefit from on-demand GPU clusters capable of scaling to enterprise requirements. They will also enjoy significant cost reductions compared to centralized providers. Seamless integration with Dell’s advanced hardware, meanwhile, will support reliable, high-performance workloads.

The collaboration between io.net and Dell Technologies represents a step forward in democratizing access to decentralized compute, particularly for organizations tackling AI training, inference and HPC use cases. By leveraging Dell’s global presence and enterprise trust, io.net is poised to accelerate adoption of decentralized compute solutions while meeting the performance standards enterprises expect.

About io.net

io.net is a decentralized distributed compute network that enables ML engineers to deploy a GPU cluster of any scale within seconds at a fraction of the cost of centralized cloud providers. io.net sources compute resources from multiple locations and deploys them into a single cluster at massive scale. It has successfully supported training, fine-tuning and inference for a wide range of ML models.

Contact: Dan Edelstein, MarketAcross