Rollups – The Ultimate Ethereum Scaling Solution
Finematics (https://finematics.com) – Mon, 02 Aug 2021

So what are rollups all about? What’s the difference between optimistic and ZK rollups? How is Arbitrum different from Optimism? And why are rollups considered to be the holy grail when it comes to scaling Ethereum? You’ll find answers to these questions in this article.

Intro

Ethereum scaling has been one of the most discussed topics in crypto. The scaling debate usually heats up during periods of high network activity such as the CryptoKitties craze in 2017, DeFi Summer of 2020 or the crypto bull market at the beginning of 2021. 

During these periods, the unparalleled demand for the Ethereum network resulted in extremely high gas fees, making it expensive for everyday users to pay for their transactions. 

To tackle this problem, the search for the ultimate scaling solution has been one of the top priorities for multiple teams and the Ethereum community as a whole. 

In general, there are three main ways to scale Ethereum (and, in fact, most other blockchains): scaling the blockchain itself (layer 1 scaling), building on top of layer 1 (layer 2 scaling) and building alongside layer 1 (sidechains). 

When it comes to layer 1, Eth2 is the chosen solution for scaling the Ethereum blockchain. Eth2 refers to a set of interconnected changes such as the migration to Proof-of-Stake (PoS), merging the state of the Proof-of-Work (PoW) blockchain into the new PoS chain and sharding. 

Sharding, in particular, can dramatically increase the throughput of the Ethereum network, especially when combined with rollups. 

If you’d like to learn more about Eth2 you can check out this article here.   

When it comes to scaling outside of layer 1, multiple different scaling solutions have been tried with some mixed results. 

On the one hand, we have layer 2 solutions such as Channels that are fully secured by Ethereum but work well only for a specific set of applications. 

Sidechains, on the other hand, are usually EVM-compatible and can scale general-purpose applications. Their main drawback is that they are less secure than layer 2 solutions, as they don't rely on the security of Ethereum and instead have their own consensus models. 

Most rollups aim to achieve the best of both worlds by creating a general-purpose scaling solution that still fully relies on the security of Ethereum.

This is the holy grail of scaling as it allows for deploying all of the existing smart contracts present on Ethereum to a rollup with little or no changes while not sacrificing security. 

No wonder rollups are probably the most anticipated scaling solution of them all. 

But what are rollups in the first place? 

Rollups 

A rollup is a type of scaling solution that works by executing transactions outside of Layer 1 but posting transaction data on Layer 1. This allows the rollup to scale the network and still derive its security from the Ethereum consensus. 

Moving computation off-chain allows many more transactions to be processed in total, as only a fraction of each rollup transaction's data has to fit into the Ethereum blocks. 

To achieve this, rollup transactions are executed on a separate chain that can even run a rollup-specific version of the EVM. 

The next step after executing transactions on a rollup is to batch them together and post them to the main Ethereum chain. 

The whole process essentially executes transactions, takes the data, compresses it and rolls it up to the main chain in a single batch, hence the name – a rollup. 
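The flow described above can be sketched in a few lines. This is a toy model, not any real rollup's code: it skips state execution entirely and only illustrates the data-availability step, serialising a batch of transactions and compressing it into the blob that would be posted to layer 1.

```python
import json
import zlib

def roll_up(transactions):
    """Simulate the core rollup flow: execute transactions off-chain,
    then compress their data into a single batch destined for layer 1."""
    # Off-chain execution would update the rollup state here; we only
    # model the posting step: serialise and compress the batch.
    raw = json.dumps(transactions, separators=(",", ":")).encode()
    batch = zlib.compress(raw)
    return batch, len(raw), len(batch)

# A batch of 100 similar toy transfers compresses very well.
txs = [{"from": "0xAlice", "to": "0xBob", "value": 1, "nonce": n} for n in range(100)]
batch, raw_size, batch_size = roll_up(txs)

# The compressed batch is what gets posted to the main chain.
print(raw_size, batch_size, batch_size < raw_size)
```

Real rollups compress far more aggressively than this (dropping signatures, shortening addresses and so on), but the principle is the same: the cheaper the batch is to post, the more transactions fit per Ethereum block.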

Although this looks like a potentially good solution, there is a natural question that comes next: 

“How does Ethereum know that the posted data is valid and wasn’t submitted by a bad actor trying to benefit themselves?” 

The exact answer depends on a specific rollup implementation, but in general, each rollup deploys a set of smart contracts on Layer 1 that are responsible for processing deposits and withdrawals and verifying proofs. 

Proofs are also where the main distinction between different types of rollups comes into play. 

Optimistic rollups use fraud proofs. In contrast, ZK rollups use validity proofs.

Let’s explore these two types of rollups further. 

Optimistic Vs ZK Rollups

Optimistic rollups post data to layer 1 and assume it's correct, hence the name "optimistic". If the posted data is valid, we are on the happy path and nothing else has to be done. The optimistic rollup benefits from not having to do any additional work in the optimistic scenario.

In case of an invalid transaction, the system has to be able to identify it, recover the correct state and penalize the party that submitted such a transaction. To achieve this, optimistic rollups implement a dispute resolution system that is able to verify fraud proofs, detect fraudulent transactions and disincentivize bad actors from submitting other invalid transactions or incorrect fraud proofs. 

In most of the optimistic rollup implementations, the party that is able to submit batches of transactions to layer 1 has to provide a bond, usually in the form of ETH. Any other network participant can submit a fraud proof if they spot an incorrect transaction. 

After a fraud proof is submitted, the system enters the dispute resolution mode. In this mode, the suspicious transaction is executed again this time on the main Ethereum chain. If the execution proves that the transaction was indeed fraudulent, the party that submitted this transaction is punished, usually by having their bonded ETH slashed. 

To prevent the bad actors from spamming the network with incorrect fraud proofs, the parties wishing to submit fraud proofs usually also have to provide a bond that can be subject to slashing.
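The bond-and-slash mechanics described above can be sketched as a toy class. This models no particular protocol: the bond size, the "re-execution" (a simple sum standing in for real transaction execution) and the slashing rule are all illustrative assumptions.

```python
class OptimisticRollup:
    """Toy model of optimistic-rollup dispute resolution: sequencers
    and challengers both post bonds, and the loser of a dispute is slashed."""

    BOND = 10  # hypothetical bond size, in ETH

    def __init__(self):
        self.bonds = {}    # party -> bonded ETH
        self.batches = []  # (submitter, inputs, claimed_result)

    def bond(self, party):
        self.bonds[party] = self.BOND

    def submit_batch(self, party, inputs, claimed_result):
        assert party in self.bonds, "must be bonded to submit batches"
        self.batches.append((party, inputs, claimed_result))

    def challenge(self, challenger, batch_index):
        assert challenger in self.bonds, "challengers are bonded too"
        submitter, inputs, claimed = self.batches[batch_index]
        # Dispute resolution: re-execute the disputed batch "on layer 1".
        actual = sum(inputs)  # stand-in for real transaction execution
        loser = submitter if actual != claimed else challenger
        self.bonds[loser] = 0  # slash the loser's bond
        return loser

rollup = OptimisticRollup()
rollup.bond("sequencer")
rollup.bond("watcher")
rollup.submit_batch("sequencer", [1, 2, 3], claimed_result=999)  # fraudulent
print(rollup.challenge("watcher", 0))  # the fraudulent submitter loses
```

Note that slashing cuts both ways: a challenger who disputes a valid batch loses their own bond, which is what deters spam fraud proofs.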

In order to be able to execute a rollup transaction on layer 1, optimistic rollups have to implement a system that is able to replay a transaction with the exact state that was present when the transaction was originally executed on the rollup. This is one of the complicated parts of optimistic rollups and is usually achieved by creating a separate manager contract that replaces certain function calls with a state from the rollup. 

It’s worth noting that the system can work as expected and detect fraud even if there is only 1 honest party that monitors the state of the rollup and submits fraud proofs if needed. 

It’s also worth mentioning that because of the correct incentives within the rollup system, entering the dispute resolution process should be an exceptional situation and not something that happens all the time. 

When it comes to ZK rollups, there is no dispute resolution at all. This is possible by leveraging a clever piece of cryptography called Zero-Knowledge proofs, hence the name ZK rollups. In this model, every batch posted to layer 1 includes a cryptographic proof called a ZK-SNARK. The proof can be quickly verified by the layer 1 contract when the transaction batch is submitted, and invalid batches can be rejected straight away. 
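The submit-with-proof pattern can be illustrated with a toy stand-in. To be clear, a real ZK-SNARK is a succinct cryptographic proof; the hash commitment below is only a placeholder that lets the "verifier" check a state transition immediately, with no challenge window, which is the property the text describes.

```python
import hashlib

def prove(state, batch):
    """Prover side: apply the batch and emit (new_state, proof).
    The 'proof' here is a simple hash commitment standing in for a SNARK."""
    new_state = state + sum(batch)  # toy state transition
    proof = hashlib.sha256(f"{state}:{batch}:{new_state}".encode()).hexdigest()
    return new_state, proof

def verify(state, batch, new_state, proof):
    """Layer 1 contract side: accept the batch only with a valid proof.
    Invalid batches are rejected straight away - no dispute window needed."""
    expected = hashlib.sha256(f"{state}:{batch}:{new_state}".encode()).hexdigest()
    return proof == expected and new_state == state + sum(batch)

state = 100
batch = [5, -2, 7]
new_state, proof = prove(state, batch)
print(verify(state, batch, new_state, proof))      # valid batch accepted
print(verify(state, batch, new_state + 1, proof))  # tampered batch rejected
```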

Sounds simple right? Maybe on the surface. In practice to make it work, multiple researchers spent countless hours iterating on these clever pieces of cryptography and maths. 

There are a few other differences between optimistic and ZK rollups, so let’s go through them one by one. 

Due to the nature of the dispute resolution process, optimistic rollups have to give enough time to all the network participants to submit the fraud proofs before finalizing a transaction on layer 1. This period is usually quite long to make sure that even in the worst-case scenario, fraudulent transactions can still be disputed. 

This causes the withdrawals from optimistic rollups to be quite long as the users have to wait even as much as a week or two to be able to withdraw their funds back to layer 1. 

Fortunately, there are a few projects that are working to improve this situation by providing fast "liquidity exits". These projects offer almost instant withdrawals back to layer 1, another layer 2 or even a sidechain, and charge a small fee for the convenience. The Hop protocol and Connext are two projects to look at. 

ZK rollups don’t have the problem of long withdrawals as the funds are available for withdrawals as soon as the rollup batch, together with a validity proof, is submitted to layer 1. 

So far it looks like ZK rollups are just a better version of optimistic rollups, but they also come with a few drawbacks. 

Due to the complexity of the technology, it's much harder to create an EVM-compatible ZK rollup, which makes it more difficult to scale general-purpose applications without having to rewrite the application logic. That said, ZKSync is making significant progress in this area and they might be able to launch an EVM-compatible ZK rollup quite soon. 

Optimistic rollups have an easier time with EVM compatibility. They still have to run their own version of the EVM with a few modifications, but 99% of contracts can be ported without making any changes. 

ZK rollups are also way more computation-heavy than optimistic rollups. This means that nodes that compute ZK proofs have to be high-spec machines, making it hard for other users to run them. 

When it comes to scaling improvements, both types of rollups should be able to scale Ethereum from around 15 to 45 transactions per second (depending on the transaction type) up to as many as 1000-4000 transactions per second.

It’s worth noting that it is possible to process even more transactions per second by offering more space for the rollup batches on layer 1. This is also why Eth2 can create a massive synergy with rollups as it increases the possible data availability space by creating multiple shards – each one of them able to store a significant amount of data. The combination of Eth2 and rollups could bring Ethereum transaction speed up to as many as 100k transactions per second. 

Now, let’s talk about all the different projects working on both optimistic and ZK rollups.

Optimistic Rollups 

Optimism and Arbitrum are currently the most popular options when it comes to optimistic rollups. 

Optimism has been partially rolled out to the Ethereum mainnet with a limited set of partners such as Synthetix or Uniswap to ensure that the technology works as expected before the full launch. 

Arbitrum has already deployed its version to the mainnet and started onboarding different projects into its ecosystem. Instead of allowing only a limited set of partners to deploy their protocols first, they decided to give all protocols that want to launch on their rollup a window of time to deploy. When this period is over, they will open the floodgates to all users in one go. 

Some of the most notable projects launching on Arbitrum are Uniswap, Sushi, Bancor, Augur, Chainlink, Aave and many more. 

Arbitrum has also recently announced its partnership with Reddit. They’ll be focusing on launching a separate rollup chain that will allow Reddit to scale their reward system. 

Optimism is partnering with MakerDAO to create the Optimism Dai Bridge and enable fast withdrawals of DAI and other tokens back to layer 1. 

Although both Arbitrum and Optimism try to achieve the same goal – building an EVM-compatible optimistic rollup solution – there are a few differences in their design.

Arbitrum has a different dispute resolution model. Instead of rerunning a whole transaction on layer 1 to verify if the fraud proof is valid, they have come up with an interactive multi-round model which allows narrowing down the scope of the dispute and potentially executing only a few instructions on layer 1 to check whether a suspicious transaction is valid. 

This also resulted in a nice side effect where smart contracts deployed on Arbitrum can be larger than the maximum allowed contract size on Ethereum. 

Another major difference is the approach to handling transaction ordering and MEV.

Arbitrum will be initially running a sequencer that is responsible for ordering transactions, but they want to decentralize it in the long run.

Optimism prefers another approach where the ordering of transactions, and hence the MEV, can be auctioned off to other parties for a certain period of time.  

It’s also worth mentioning a few other projects working on optimistic rollups. Fuel, the OMG team with OMGX and Cartesi to name a few. Most of them try to also work on an EVM-compatible version of their rollups. 

ZK Rollups

Although it looks like the Ethereum community is mostly focusing on optimistic rollups, at least in the short run, let’s not forget that the projects working on ZK rollups are also progressing extremely quickly. 

With ZK rollups we have a few options available.

Loopring uses ZK rollup technology to scale its exchange and payment protocol. 

Hermez and ZKTube are working on scaling payments using ZK rollups, with Hermez also building an EVM-compatible ZK rollup. 

Aztec is focusing on bringing privacy features to their ZK rollup technology. 

StarkWare-based rollups are already extensively used by projects such as DeversiFi, Immutable X and dYdX. 

As we mentioned earlier, ZKSync is working on an EVM-compatible virtual machine that will be able to fully support arbitrary smart contracts written in Solidity. 

Summary

As we can see, there are a lot of things going on in both the optimistic and the ZK rollup camps and the competition between different rollups will be interesting to watch. 

Rollups should also have a big impact on DeFi. Users who were previously not able to transact on Ethereum due to high transaction fees will be able to stay in the ecosystem the next time the network activity is high. They will also enable a new breed of applications that require cheaper transactions and faster confirmation time. All of this while being fully secured by the Ethereum consensus. It looks like rollups may trigger another high growth period for DeFi. 

There are however a few challenges when it comes to rollups. 

Composability is one of them. In order to compose a transaction that uses multiple protocols, all of them would have to be deployed on the same rollup. 

Another challenge is fractured liquidity. For example, without the new money coming into the Ethereum ecosystem as a whole, the existing liquidity present on layer 1 in protocols such as Uniswap or Aave will be shared between layer 1 and multiple rollup implementations. Lower liquidity usually means higher slippage and worse trade execution. 

This also means that naturally there will be winners and losers. At the moment, the existing Ethereum ecosystem is not big enough to make use of all scaling solutions. This may and probably will change in the long run, but in the short run, we may see some of the rollups, and other scaling solutions, becoming ghost towns. 

In the future, we may also see users living entirely within one rollup ecosystem and not interacting with the main Ethereum chain and other scaling solutions for long periods of time. This could be particularly visible if we’re going to see more centralized exchanges enabling direct deposits and withdrawals to and from rollups. 

Nevertheless, rollups seem like the ultimate strategy for scaling Ethereum, and the challenges will most likely be mitigated in one way or another. It will clearly be interesting to see rollups gain more and more user adoption. 

One question that comes up very often when discussing rollups is whether they are a threat to sidechains. Personally, I think that sidechains will still have their place in the Ethereum ecosystem. This is because, although the cost of transactions on Layer 2 will be much lower than on Layer 1, it will most likely still be high enough to price out certain types of applications such as games and other high-volume apps. 

This may change when Ethereum introduces sharding, but by then sidechains may create enough network effect to survive long term. It will be interesting to see how this plays out in the future. 

Also, the fees on rollups are higher than on sidechains because each rollup batch still has to pay for the Ethereum block space. 

It’s worth remembering that the Ethereum community puts a huge focus on rollups in the Ethereum scaling strategy – at least in the short to mid-term and potentially even longer. I recommend reading Vitalik Buterin’s post on a rollup-centric Ethereum roadmap.

So what do you think about rollups? What are your favourite rollup technologies? 

If you enjoyed reading this article you can also check out Finematics on Youtube and Twitter.

Binance Smart Chain and CeDeFi Explained
Finematics – Thu, 04 Mar 2021

So what is Binance Smart Chain? How is it different from Ethereum? And what is CeDeFi all about? You’ll find answers to these questions in this article. 

First, let’s see how Binance Smart Chain came into existence. 

Binance Chain 

In April 2018, Binance – one of the biggest cryptocurrency exchanges, decided to launch their own blockchain – Binance Chain. 

The main idea behind Binance Chain was to create a high-speed blockchain able to support large transaction throughput.

To achieve this, the team behind Binance Chain chose the Tendermint consensus model with instant finality and instead of supporting multiple applications, decided to focus on its primary app – Binance DEX.

With DeFi on Ethereum flourishing and Binance DEX not getting as much traction as expected, Binance quickly realised that the main feature missing from Binance Chain was the ability to run smart contracts and allow other teams to deploy their own applications. 

At this point, Binance made an interesting decision. Instead of trying to add smart contract capabilities to Binance Chain and sacrificing its performance, they decided to launch another chain in parallel to Binance Chain and this is where Binance Smart Chain comes into play. 

Binance Smart Chain 

Binance Smart Chain launched in September 2020 and in contrast with Binance Chain, was fully programmable and supported smart contracts out of the box. 

If you’d like to better understand what smart contracts are and why they are so important, you can check out this article here. 

Creating a completely new smart contract platform from scratch requires years of work and research. Instead of doing that, Binance decided to leverage users’ and developers’ familiarity with Ethereum and forked Ethereum’s go client – geth. 

Of course, forking Ethereum without making any changes wouldn’t make much sense, so Binance decided to optimise the new chain for low fees and higher transaction throughput by sacrificing decentralization and censorship-resistance properties of the network. 

This was achieved by replacing Ethereum’s Proof-of-Work consensus model with the Proof-of-Staked-Authority model and tweaking a few other parameters such as the block time and the gas limit per block. 

Before we jump into the details of Binance Smart Chain, let’s see why some properties of the network had to be sacrificed in the first place. We can understand this better by revisiting the famous Scalability Trilemma. 

Scalability Trilemma 

The Scalability Trilemma is a useful model, introduced by Vitalik Buterin, that helps with visualising what trade-offs have to be made when it comes to different blockchain architectures. 

Each blockchain balances three core properties: security, scalability and decentralization. These cannot all be achieved simultaneously, so in order to significantly improve one of them, the others have to be sacrificed.

Sharding is an attempt at solving this challenge at the base layer by splitting a blockchain into multiple smaller chains – “shards”. Sharding is one of the scaling approaches chosen by Ethereum and it’s one of the elements of the Eth2 upgrade. 

Unfortunately, sharding by itself cannot fully solve the trilemma and even sharded blockchains wouldn’t be able to process hundreds of thousands or even millions of transactions per second without sacrificing decentralization and security.  

This is also why the Ethereum community decided to use Layer 2 solutions that can dramatically increase the scalability of a blockchain without sacrificing other properties. 

It shouldn’t come as a big surprise that there were a lot of other projects popping up that, despite The Scalability Trilemma, decided to scale up by sacrificing the other 2 properties. One of the most notable examples was EOS.

This is also the approach that Binance Smart Chain decided to go with. 

Architecture 

Binance Smart Chain, instead of using a Proof-of-Work (PoW) or a Proof-of-Stake (PoS) consensus mechanism, uses a Proof-Of-Staked-Authority (PoSA) model. 

In this model, all transactions are validated by a set of nodes called validators. A validator can be either active or inactive. The number of active validators is limited to 21 and only active validators are eligible to validate transactions. 

Active validators are determined by ranking all validators based on the amount of BNB tokens they hold. The top 21 validators with the highest amount of BNB become active and take turns validating blocks. This is determined once per day and the set of all validators is stored separately on Binance Chain.
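The daily election described above amounts to a simple ranking. The sketch below is a simplified model with made-up stakes: real elections count delegated BNB as well, which is folded into a single number per candidate here.

```python
def active_validators(stakes, limit=21):
    """Rank validator candidates by staked BNB and return the top
    `limit` as the day's active set (a simplified model of BSC's
    once-per-day election)."""
    ranked = sorted(stakes.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:limit]]

# Toy candidate set: 30 candidates with strictly decreasing stakes.
stakes = {f"validator-{i}": 1_000_000 - i * 10_000 for i in range(30)}
active = active_validators(stakes)

print(len(active), active[0])  # only the top 21 get to validate blocks
```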

Besides staking BNB tokens themselves, validators can also encourage BNB holders to delegate their BNB tokens to them in order to receive a share of the validator’s transaction fees. 

On this note, all transaction fees on Binance Smart Chain are paid in BNB which is the native token of the chain, in a similar way to how ETH is native to the Ethereum blockchain. 

In contrast to Ethereum and Bitcoin, there are no block subsidy rewards on Binance Smart Chain. This means that the validators only receive the transaction fees paid in BNB and there is no other fixed reward per block. 

Although the PoSA consensus model allows for achieving a short block time and lower fees, it does so at a cost of decentralization and security of the network. 

First of all, a user cannot simply start validating the state of the blockchain in the way they can in Bitcoin or Ethereum. 

On top of this, even if a user could join the network in a permissionless way and start validating transactions, they wouldn't be able to do it for very long on consumer-grade hardware, as the state of Binance Smart Chain grows at a much higher rate than Ethereum's.

Now, let’s see how the PoSA-based model allowed the Binance Smart Chain team to change the block time and the block gas limit.

The block time was reduced from around 13s on Ethereum to around 3s on Binance Smart Chain. This allows for higher transaction throughput and faster confirmation time, at a cost of having to store more data. 

If implemented on Ethereum, it would also increase the number of orphaned blocks as there would not be enough time to propagate valid blocks across the network from multiple different geographic locations.

When it comes to Binance Smart Chain, however, this is not a problem as validators just take turns validating blocks. 

Block gas limit is another important parameter that we discussed in our article about the gas fees. This parameter basically decides how many transactions can fit into one single block. On Ethereum, miners have to come to a consensus and decide what value they want to set it to. 

Increasing the block gas limit, similarly to reducing the block time, increases the amount of data produced by the blockchain which makes it harder for individual users to run their own nodes. 

Again, this is not a problem on Binance Smart Chain as the 21 validators can just run their nodes on institutional-grade hardware when the state of the blockchain grows beyond what can be handled by consumer-grade hardware. 

At the time of writing this article, the gas limit per block is set to 12.5M gas on Ethereum and 30M on Binance Smart Chain. 

By knowing both the block time and the gas limit per block we can quickly calculate that the amount of data on Binance Smart Chain increases roughly at a 10-times faster rate than the state on the Ethereum blockchain. 

Currently, with an average block size of 40,000 bytes, Binance Smart Chain grows by around 1.15 GB per day which is around 420 GB per year. After a couple of years, this of course eliminates most of the consumer-grade hardware. 
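The figures quoted in the last few paragraphs follow directly from the block parameters, as this worked calculation shows (using the article's numbers: 12.5M gas per ~13s Ethereum block, 30M gas per ~3s BSC block, and an average BSC block size of 40,000 bytes).

```python
SECONDS_PER_DAY = 24 * 60 * 60

# Gas throughput comparison.
eth_gas_per_sec = 12_500_000 / 13  # Ethereum: 12.5M gas every ~13s
bsc_gas_per_sec = 30_000_000 / 3   # BSC: 30M gas every ~3s
ratio = bsc_gas_per_sec / eth_gas_per_sec

# Storage growth at an average block size of 40,000 bytes.
blocks_per_day = SECONDS_PER_DAY / 3
gb_per_day = blocks_per_day * 40_000 / 1e9
gb_per_year = gb_per_day * 365

print(round(ratio, 1))       # roughly 10x Ethereum's gas throughput
print(round(gb_per_day, 2))  # ~1.15 GB of new chain data per day
print(round(gb_per_year))    # ~420 GB per year
```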

Now as we understand a bit more about the Binance Smart Chain architecture, let’s see what CeDeFi is all about. 

CeDeFi

As we know, DeFi stands for decentralized finance. CeFi is its opposite and, as we can probably guess, stands for centralized finance. CeDeFi is a term coined by the CEO of Binance that describes a mix of centralized and decentralized finance, of which Binance Smart Chain is a good example. 

So what are the benefits of such a solution? 

CeDeFi allows users to get a feel for using DeFi without paying high transaction fees. Low fees encourage users to play with multiple different DeFi protocols such as decentralized exchanges, lending protocols, liquidity aggregators, yield farming tools and others.

On top of this, CeDeFi makes users familiar with common DeFi tools like Metamask and block explorers. 

It also allows new teams to deploy their smart contracts for a fraction of the cost compared to what they would have to pay on the Ethereum blockchain. This way they can easily test and get feedback on their projects. Testing within an ecosystem with actual economic incentives usually works much better than just testing on a testnet. 

Binance Smart Chain and CeDeFi have recently started gaining a lot of popularity. This is mainly driven by the high transaction cost on Ethereum that priced out some of the users.

As we know, Binance Smart Chain is a fork of Ethereum and therefore allows for running exactly the same smart contracts as the ones on Ethereum. 

This allowed the network to quickly bootstrap its ecosystem by essentially either reusing or forking all popular Ethereum services and applications.

Users can connect to Binance Smart Chain based dApps by switching their network in Metamask. They can look up their transactions on bscscan.com, which is pretty much a copy of etherscan.io. They can trade on PancakeSwap – a fork of Uniswap. They can lend and borrow on Venus – a fork of Compound – and yield farm via Autofarm – a protocol that resembles Yearn Finance. 

Binance Smart Chain, similarly to Ethereum, also allows for creating new tokens using their BEP-20 standard – Ethereum’s ERC-20 counterpart.

Some Ethereum-based projects also quickly saw the opportunity for expanding their reach to Binance Smart Chain, at a minimal cost. 1Inch – a liquidity aggregator – has recently decided to also launch on Binance Smart Chain.

Summary

It’s clearly visible that Binance Smart Chain was able to gain quite a lot of traction and attract a decent number of users and trading volume in a very short amount of time. 

A decision to fork Ethereum and allow users and developers to interact with DeFi tools and protocols they are already familiar with was quite clever. 

The timing was also extremely good. The popularity of Ethereum combined with most Ethereum scaling solutions still in progress and a roaring bull market resulted in high transaction fees that priced out smaller users and forced them to find a different option if they still wanted to participate in DeFi.

On top of this, Binance was able to leverage its position as one of the top cryptocurrency exchanges and make it easy for its millions of users to easily withdraw BNB and other tokens directly to Binance Smart Chain. 

The main question to ask here is whether this is short-term growth caused only by high transaction fees on Ethereum or longer-term user acquisition. 

At this point, it’s hard to say, but two main things pointing at the former are Ethereum’s layer 2 scaling solutions and the Eth2 scaling roadmap. 

Both of these can dramatically reduce the transaction fees on Ethereum without sacrificing other properties like security and decentralization. 

We can already get a feel for it with Matic (a.k.a. Polygon) and Loopring attracting more and more users and trading volume. This trend should only keep escalating with other layer 2 solutions getting more traction and new ones like Optimism fully launching in a matter of weeks. 

With millions of new users entering the cryptocurrency space, it’s also extremely important to make sure they are aware of the differences between DeFi and CeDeFi and are able to make their own decisions. 

At the end of the day, we have to ask ourselves the question. What’s the main point of using a blockchain if it’s not fully decentralized and permissionless? Auditability? Maybe, but is this really the main value proposition of the whole cryptocurrency space?

It will clearly be interesting to see how DeFi and CeDeFi play out. 

So what do you think about Binance Smart Chain? Does CeDeFi have a future? 

If you enjoyed reading this article you can also check out Finematics on Youtube and Twitter.

The Graph – Google Of Blockchains?
Finematics – Wed, 13 Jan 2021

So what is The Graph Protocol all about? Why do some people call it the Google of Blockchains? And what is the use case for the GRT token? You’ll find answers to these questions in this article. 

Let’s start with what The Graph actually is. 

Introduction

The Graph is an indexing protocol for querying blockchain data that enables the creation of fully decentralized applications. 

The project was started in late 2017 by a trio of software engineers who were frustrated by the lack of tooling in the Ethereum ecosystem, which made building decentralized applications hard. After a few years of work and a lot of iterations, The Graph went live in December 2020.

The Graph, as one of the infrastructure protocols, can be quite tricky to grasp, so before we jump into the details, let’s try to understand what indexing – the main concept behind The Graph – actually is.

Indexing

Indexing, in essence, allows for reducing the time required to find a particular piece of information. A real-life example is an index in a book. Instead of going through the whole book page by page to find a concept we're looking for, we can find it much quicker in the index, which is sorted alphabetically and contains a reference to the actual page in the book. 

Similarly, in computer science, database indexes are used to achieve the same goal – cutting the search time. Instead of scanning the whole database table multiple times to answer an SQL query, indexes can dramatically speed up queries by providing quick access to the relevant rows in a table. 

When it comes to blockchains such as Ethereum, indexing is super important. To understand why this is the case, let’s see how a typical blockchain is built. 

A typical blockchain consists of blocks that contain transactions. Blocks are connected to their adjacent blocks and provide a linear immutable history of what happened on the blockchain to date. 

Because of this design, a naive approach for searching for a particular piece of data, such as a transaction, would be to start with Block 1 and search for a transaction across all transactions in that block. If the data is not found we move to Block 2 and continue our search. 

As you can imagine, this process would be highly inefficient. This is also why every popular blockchain explorer, such as Etherscan, has built its own service for reading all the data on the blockchain and storing it in a database in a way that allows for quick retrieval.

These kinds of services are often called ingestion services, as they consume all the data and transform it into a queryable format. 

Although this approach usually works fine, it requires trusting the company that provides the data – this is not ideal for building fully decentralized and permissionless applications. 

On top of that, every crypto company that doesn't want to rely on third-party APIs has to build its own ingestion service, which creates a lot of redundant work. 

This is also why a decentralized query protocol for blockchains was needed and this is where The Graph comes into play. 

The Graph 

The Graph aims to become one of the core infrastructure projects necessary for building fully decentralized applications. It focuses on decentralizing the query and API layer of the decentralized web (Web3) by removing a tradeoff that dApp developers have to make today: whether to build an app that is performant or one that is truly decentralized.

The protocol allows for querying different networks such as Ethereum or IPFS by using a query language – GraphQL. GraphQL allows for specifying which fields we’re interested in and what search criteria we would like to apply. 

Queryable data is organised in the form of subgraphs. One decentralized application can make use of one or multiple subgraphs. A subgraph can also be composed of other subgraphs, providing a consolidated view of the data the application is interested in. 

The Graph provides an explorer that makes it easy to find subgraphs of the most popular protocols such as Uniswap, Compound, Balancer or ENS. 

The Uniswap subgraph, for example, provides access to a lot of useful data: the total volume across all trading pairs since the protocol launched, volume per trading pair, and data about particular tokens or transactions. 
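As an illustration, a query for the top trading pairs might look like the sketch below. The field names (`pairs`, `volumeUSD`, `token0`/`token1`) follow the Uniswap v2 subgraph's schema, but the exact fields always depend on the particular subgraph:

```python
import json

# A GraphQL query against a Uniswap-style subgraph: fetch the five
# pairs with the highest all-time USD volume. Field names follow the
# Uniswap v2 subgraph schema; other subgraphs define their own.

query = """
{
  pairs(first: 5, orderBy: volumeUSD, orderDirection: desc) {
    token0 { symbol }
    token1 { symbol }
    volumeUSD
  }
}
"""

# The query would normally be POSTed as JSON to the subgraph's HTTP
# endpoint; the payload is just {"query": ...}. The network call is
# omitted here to keep the sketch self-contained.
payload = json.dumps({"query": query})
print(payload[:40])
```

GraphQL's appeal here is exactly what the text describes: the consumer names only the fields and search criteria it cares about, and gets back nothing else.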

Now, let’s jump into the architecture of The Graph Protocol. 

The Graph Architecture

The easiest way to explain this is to focus on different network participants first. 

Let’s start with Indexers. 

Indexers are the node operators of The Graph. They join the network by staking GRT tokens and running a Graph Node. Their main function is to index relevant subgraphs. Indexers earn rewards for indexing subgraphs and fees for serving queries on those subgraphs. They also set the prices for their services; to keep prices in check, each Indexer competes with the others on price as well as on the quality of their data. This effectively creates a marketplace for the services provided by Indexers. 

Consumers query Indexers and pay them for providing data from different subgraphs. Consumers can be either end-users, other web services or middleware.

Curators are other important network participants. They use their GRT tokens to signal what subgraphs are worth indexing. Curators can be either developers that want to make sure their subgraph is indexed by Indexers or end-users that find a particular subgraph valuable and worth indexing. Curators are financially incentivised as they receive rewards that are proportional to how popular a particular subgraph becomes. 

Delegators are yet another network participant. They stake their GRT on behalf of Indexers in order to earn a portion of Indexers’ rewards and fees. Delegators don’t have to run a Graph Node. 

Last but not least are Fishermen and Arbitrators. They become useful in case of a dispute that can happen, for example, when an Indexer provides incorrect data to the Consumer. 

Now, let’s see how the network participants cooperate in order to create a trustless and decentralized system. 

Let’s say a new decentralized exchange has launched and the team behind the project wants to give other applications easy access to the exchange’s historical volume and other data points.

To encourage Indexers to index the new subgraph, a Curator has to step in and signal that the new subgraph is worth indexing. 

Here we have 2 options. If the new exchange is a highly anticipated project with a lot of potential, an existing Curator will most likely step in and use their GRT tokens to signal the usefulness of the new subgraph. If the subgraph becomes popular, the Curator benefits financially from their signalling. If the exchange is not highly anticipated, the developers behind the project can become Curators themselves and use their own GRT to encourage Indexers. 

Once this happens, the Indexers can step in and start indexing the subgraph. This process can take a few hours or even a few days depending on how much data has to be indexed. 

Once indexing is completed, the Consumers can start querying the subgraph. Each query issued by the consumers requires payment in GRT that is handled by the query engine. The query engine also acts as a trading engine, making decisions such as which Indexers to do business with. 

To make this process smoother, The Graph uses payment channels between the Consumer and the Indexer. If the Indexer provides incorrect results a dispute process can be initiated. 

If you’d like to dive deeper into the architecture behind The Graph protocol, you can check this link here.

Now, time to discuss the GRT token. 

The GRT Token

GRT is a utility token that plays an important role in The Graph’s network design. As mentioned earlier, GRT is used by Curators to signal which subgraphs are worth indexing. On top of this, it’s staked by Indexers to keep their incentives in check. People who own GRT but don’t want to be Indexers and run a Graph Node can become Delegators and earn a portion of Indexers’ rewards. And finally, Consumers pay for their queries in GRT.

The Graph had an initial supply of 10 billion GRT tokens, with new token issuance of 3% annually used to pay indexing rewards.

There is also a token burning mechanism that is expected to start at ~1% of total protocol query fees. 
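Putting these figures together, a back-of-the-envelope calculation of net annual issuance might look like this (the annual query-fee volume is a purely hypothetical placeholder):

```python
# Back-of-the-envelope GRT supply math using the figures above.
# The annual query-fee volume is a made-up placeholder, not real data.

initial_supply = 10_000_000_000      # 10 billion GRT initial supply
issuance_rate = 0.03                 # 3% annual issuance (indexing rewards)
burn_rate = 0.01                     # ~1% of query fees burnt
annual_query_fees = 500_000_000      # hypothetical: 500M GRT paid in fees

minted = initial_supply * issuance_rate   # 300M GRT minted per year
burnt = annual_query_fees * burn_rate     # 5M GRT burnt per year
net_change = minted - burnt

print(f"minted: {minted:,.0f}, burnt: {burnt:,.0f}, net: {net_change:,.0f}")
```

Under these assumptions issuance dominates the burn by a wide margin, which is why the burn is best seen as a dampener on inflation rather than a path to deflation.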

The Graph protocol had a huge interest from VCs, with plenty of big names including Coinbase Ventures participating in their initial offering. 

Future

The Graph core team aims to decentralize the protocol further by launching on-chain governance – The Graph Council – in the future.

The protocol, currently deployed to the Ethereum mainnet, only supports indexing Ethereum, but multi-blockchain support is one of the areas of further research. 

The Graph is already used by other popular projects such as Uniswap, Synthetix, Decentraland and Aragon. 

It looks like The Graph could be one of the missing puzzle pieces in the effort to increase the decentralization of dApps. 

Some people went as far as calling The Graph the Google Of Blockchains, pointing at similarities between indexing websites by Google and indexing blockchains and decentralized applications by The Graph. 

If this analogy is correct, and The Graph indeed becomes a go-to protocol for indexing web3, it has a lot of potential to grow. 

So what do you think about The Graph? Will it become a core piece of infrastructure in the decentralized world? 

If you enjoyed reading this article you can also check out Finematics on Youtube and Twitter.

]]>
https://finematics.com/the-graph-explained/feed/ 0
Can ETH Become Deflationary? EIP-1559 Explained https://finematics.com/ethereum-eip-1559-explained/ Fri, 06 Nov 2020 14:16:19 +0000 https://finematics.com/?p=1072

So what is EIP-1559, a.k.a. the fee burn proposal, all about? Will it lower Ethereum’s gas fees? And how can it make ETH deflationary? We’ll answer all of these questions in this article. 

EIPs 

Let’s start with what EIPs actually are. 

EIP stands for Ethereum Improvement Proposal and it is a common way of requesting changes to the Ethereum Network inspired by Bitcoin Improvement Proposals (BIPs). An EIP is a design document covering technical specifications of the proposed change and rationale behind it. 

The majority of EIPs focus on improving technical details of Ethereum and they are not widely discussed outside of the core Ethereum developers community. 

EIP-1559 is one of the exceptions. This is because the proposal has some big implications when it comes to the ETH monetary policy and client applications such as wallets. 

Ethereum Fee Model

EIP 1559 describes changes to the Ethereum fee model and it was put forward by Vitalik Buterin in 2019. 

To understand why we need this proposal in the first place, let’s quickly review how the current Ethereum fee model works. 

The current fee model is based on a simple auction mechanism, also known as a first-price auction. Users who want their transaction picked up by a miner essentially have to bid for space in a block. 

This is done by submitting a gas price that they are willing to pay for a particular transaction.  

Miners are incentivised to sort transactions by gas price and include the most profitable ones first. 

This can be quite inefficient and usually results in users overpaying for their transactions. 

The model is also quite problematic for wallets. Metamask, for example, allows users to adjust their fee by choosing between slow, average and fast confirmation times, or by specifying a gas price manually. 

Less sophisticated users who are unlucky enough to submit their transaction with a default fee just before a spike in gas prices may end up waiting a long time for their transaction to be confirmed. This is, of course, not ideal from a user experience point of view.  

This is also where EIP 1559 comes into play. The proposal was made to address these problems and it aims to achieve the following goals. 

  1. Making transaction fees more predictable 
  2. Reducing delays in transaction confirmation
  3. Improving user experience by automating the fee bidding system 
  4. Creating a positive feedback loop between network activity and the ETH supply 

Now, let’s see what the proposed change is all about. 

EIP 1559

EIP 1559 introduces a new concept of a base fee (BASEFEE). 

The base fee represents the minimum fee that has to be paid by a transaction to be included in a block. The base fee is set per block and it can be adjusted up or down depending on how congested the Ethereum network is. 

The next big part of EIP 1559 is an increase in the network capacity, achieved by changing the max gas limit per block from 12.5M to 25M gas, basically doubling the block size.

With the base fee and increased network capacity, EIP 1559 can build the following logic: 

  • When the network is at > 50% utilization, the base fee is incremented 
  • When the network is at < 50% utilization, the base fee is decremented 

This basically means that the network aims at achieving equilibrium at 50% capacity by adjusting fees accordingly to the network utilization.
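This adjustment logic can be sketched as a simple update rule, assuming a 50% target (12.5M of 25M gas) and the 1/8 (12.5%) maximum per-block change:

```python
# A sketch of the EIP-1559 base fee update rule. The target is 50% of
# the max block size (12.5M of 25M gas), and the base fee moves by at
# most 1/8 (12.5%) per block. Fees are in wei; 1 gwei = 10**9 wei.

MAX_GAS = 25_000_000
TARGET_GAS = MAX_GAS // 2          # 50% utilization target
ADJUSTMENT_QUOTIENT = 8            # max 1/8 change per block

def next_base_fee(base_fee, gas_used):
    # Scale the change by how far the block deviates from the target.
    delta = base_fee * (gas_used - TARGET_GAS) // (TARGET_GAS * ADJUSTMENT_QUOTIENT)
    return base_fee + delta

# A 100% full block pushes the fee up by the full 12.5% ...
print(next_base_fee(50_000_000_000, 25_000_000))   # 50 gwei -> 56.25 gwei
# ... while an empty block pulls it down by 12.5% ...
print(next_base_fee(50_000_000_000, 0))            # 50 gwei -> 43.75 gwei
# ... and a block exactly at target leaves the fee unchanged.
print(next_base_fee(50_000_000_000, 12_500_000))   # 50 gwei -> 50 gwei
```

Compounding this cap, 1.125^20 ≈ 10.5 – roughly 20 consecutive full blocks for a 10x fee increase.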

EIP 1559 also introduces a miner tip – a separate fee that can be paid directly to the miner to incentivise them to prioritise a transaction. 

This is very similar to the current mechanism where miners can be incentivised by higher gas fees. This feature is particularly important for transactions that benefit from quick confirmation, such as arbitrage transactions.

Now, let’s go through a quick example to see how the EIP 1559 fee model compares to the existing model during a period of high network activity. 

Let’s start with the current fee model.

Imagine the min gas fee to be included in a previous block was 50 gwei. The network activity seems to remain the same so users start submitting their transactions with 50 gwei trying to be included in the next block. At the same time, a new highly anticipated token is launched causing users who want to buy it to dramatically increase their bids. Now, to be included in the next block the min required fee is 100 gwei. If the network activity remains high for multiple subsequent blocks, the users who already submitted their transactions with 50 gwei may wait for their confirmations for a long period of time. 

In this case, the block size is capped at 12.5M gas and the only way to get into a block is to bid higher than the other users. 

Let’s go through the same scenario, this time with EIP 1559 in place. 

In the previous block, the 50 gwei was the base fee and the network utilization was at 50% with most blocks using 12.5M gas – half of the max gas limit. 

The spike caused by the release of the new token results in users submitting their transactions with a higher miner tip. 

Seeing the high demand for the block space and a lot of transactions with high miner tips, the miners produce a block that is at the max cap limit of 25M gas. This results in more transactions being included in a block, but it also causes the base fee to be increased in the following block as the current block is 100% full (>50% network utilization). 

If the network activity and demand for block space remain high, the miners would keep producing full blocks,  increasing the base fee with each subsequent block. At some point, the fee would become high enough to drive off some of the users, causing the network to start coming back to below 50% network utilization and lowering the fees in the subsequent blocks.

The base fee can increase or decrease by a maximum of 12.5% per block, so it would take roughly 20 blocks (5 minutes) for gas prices to 10x and 40 blocks to 100x. In our example, the second block would have a base fee of 56.25 gwei. 

This example demonstrates how spikes in network fees can be smoothed out when EIP 1559 is implemented. Another way of thinking about this model is to imagine that it basically swaps high volatility in the fee prices for volatility in the block size. 

Because the increments and decrements are constrained, the difference in the base fee from block to block can be easily calculated. 

This allows wallets to automatically set the base fee based on the information from the previous blocks. 

To avoid a situation where miners can collude and artificially inflate the base fee for their own benefit, the entire base fee is burnt. 

Let’s repeat this – the base fee is always entirely burnt, while the miner tip always goes entirely to the miner. 

There is also one more important new concept, known as a FEECAP. It can be set by users who want to limit how much they pay for a particular transaction instead of just paying whatever the current base fee is. A transaction with a FEECAP lower than the current base fee has to wait until the base fee drops below its cap before it can be included in a block.
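The inclusion condition is a simple comparison – a minimal sketch (the function name is illustrative, not part of the spec):

```python
# Illustrative sketch of the FEECAP inclusion check: a transaction is
# only eligible for a block once the base fee is at or below its cap.

def can_include(fee_cap_gwei, base_fee_gwei):
    return fee_cap_gwei >= base_fee_gwei

# A tx capped at 60 gwei waits out a 100 gwei spike ...
print(can_include(60, 100))   # False -> stays in the mempool
# ... and becomes eligible once the base fee falls back down.
print(can_include(60, 55))    # True
```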

The fee changes are also backward compatible. Legacy Ethereum transactions would still work under the new fee system, although they would not benefit directly from the new pricing model. 

Implications

Changes proposed in EIP 1559 have a lot of implications, some of them quite severe. 

Less profit for the miners. In the current fee system, miners receive both the block subsidy reward and the entire gas fee. With the recent high gas prices caused by DeFi, miners were able to collect more money from fees than from the actual block rewards, even though historically block rewards were always much bigger than the fees collected from transactions.

After the changes in EIP 1559 are implemented, miners would only receive the block reward plus the miner tip. This is also why most miners are quite reluctant to implement the proposal, suggesting the change be pushed to ETH 2.0 instead. 

Another major implication is the change required by the wallets. With EIP 1559 in place, wallets don’t have to estimate the gas fees anymore. They can just set the base fee automatically based on the information available in the previous block. This should simplify wallets’ user interfaces. 

The base fee burning also has major implications when it comes to the ETH supply. This is also why EIP 1559 is very often discussed by ETH investors. 

Burning the base fee creates an interesting feedback loop between the network usage and the ETH supply. More network activity = more ETH burnt = less ETH available to be sold on the market by miners, making the already existing ETH more valuable. 

Burning the base fee basically rewards the users of the network by making their ETH more scarce instead of overpaying miners. 

The fee burning mechanism has also sparked discussions about ETH becoming deflationary. This would happen whenever the block reward is lower than the base fee burnt – which would have been the case, for example, during the recent DeFi gas fee craze when the network was constantly under heavy utilization. 

One potential drawback of burning the base fee is losing precise control over the long-term monetary policy of ETH. With this change, ETH would sometimes be inflationary and sometimes deflationary. This doesn’t look like a major problem, as the maximum inflation would be capped at around 0.5-2% per year anyway.  
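The inflation-vs-deflation question for any given block is just a comparison between issuance and burn; a minimal sketch with hypothetical numbers:

```python
# Per-block supply check: ETH is net deflationary in a block whenever
# the base fee burnt exceeds new issuance. Numbers are hypothetical.

def net_supply_change(block_reward_eth, base_fee_burnt_eth):
    return block_reward_eth - base_fee_burnt_eth

print(net_supply_change(2.0, 1.5))   # quiet block: supply grows
print(net_supply_change(2.0, 3.0))   # congested block: supply shrinks
```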

So will EIP 1559 make gas fees much lower? 

Not really. It will clearly optimise the fee model by smoothing fee spikes and limiting the number of overpaid transactions, but the main ways of lowering gas fees remain ETH 2.0 and Layer 2 scaling solutions. 

When EIP 1559?

It looks like EIP 1559 would be a great change to the Ethereum fee system. This also seems to be the consensus within the Ethereum community with the majority of people rooting for the change to be implemented.  

There are still a few challenges to overcome, especially when it comes to making sure that miners can safely process bigger blocks without making the whole network more prone to Denial of Service attacks. 

EIP 1559 belongs to the Core category of EIPs which means that the change affects the Ethereum consensus and requires all the clients to upgrade at the same time (hard fork). 

From the timeline perspective, it looks like EIP 1559 could be implemented in the next hard fork after the Berlin hard fork, expected sometime in 2021. 

The team leading the charge received funding from the Ethereum Foundation and from the EIP 1559 Gitcoin grant. Most of the coordination work is done by Tim Beiko. 

Depending on the timeline, EIP 1559 can be either implemented in both Ethereum 1.0 and 2.0 or potentially only in Ethereum 2.0 if there are some delays in place. 

So what do you think about EIP-1559? Will it have any impact on the ETH price?

If you enjoyed reading this article you can also check out Finematics on Youtube and Twitter.

]]>
https://finematics.com/ethereum-eip-1559-explained/feed/ 0
Ethereum Layer 2 Scaling Explained https://finematics.com/ethereum-layer-2-scaling-explained/ Tue, 27 Oct 2020 19:09:12 +0000 https://finematics.com/?p=1057

So what is Ethereum Layer 2 scaling all about? And what is the difference between projects such as Optimism, xDai, OMG and Loopring? We’ll answer all of these questions in this article. 

Need For Scaling

Ethereum scaling has been one of the most discussed topics pretty much since the time when the network launched. The scaling debate always heats up after a period of major network congestion. 

One of the first periods like this was the 2017 crypto bull market, when the infamous CryptoKitties, together with ICOs, managed to clog up the entire Ethereum network, causing a major spike in gas fees. 

This year the network congestion came back even stronger, this time caused by the popularity of DeFi and yield farming. There were periods when even gas fees as high as 500+ gwei would not get a transaction confirmed for a while. 

When it comes to scaling Ethereum or blockchains in general, there are 2 major ways of doing it: scaling the base layer itself (Layer 1) or scaling the network by offloading some of the work to another layer – Layer 2. 

Layers 1 vs Layer 2 Scaling 

Layer 1 is our standard base consensus layer where pretty much all transactions are currently settled. The concept of layers is not Ethereum-specific; other blockchains such as Bitcoin or Zcash also use it extensively.   

Layer 2 is another layer built on top of Layer 1. There are a few important points here. Layer 2 doesn’t require any changes in Layer 1, it can be just built on top of Layer 1 using its existing elements such as smart contracts. Layer 2 also leverages the security of Layer 1 by anchoring its state into Layer 1.

Ethereum can currently process around 15 transactions per second on its base layer (Layer 1). Layer 2 scaling can dramatically increase the number of transactions. Depending on the solution, we’re talking about processing between 2000-4000 tx/second. 

How about Ethereum 2.0? Wasn’t that supposed to scale Ethereum? 

Yes. Ethereum 2.0 introduces Proof of Stake and sharding that will dramatically increase the transaction throughput on the base layer. 

Does it mean we don’t need Layer 2 scaling when Ethereum 2.0 ships? 

Not really, even with sharding Ethereum will still need Layer 2 scaling to be able to handle hundreds of thousands or even millions of tx per second in the future. 

This is also where the famous Scalability Trilemma comes into play. In theory, we could just skip Layer 2 entirely and focus on scaling the base layer instead. This would require highly specialized nodes to handle the increased workload, which would lead to higher centralization and, therefore, lower the security and censorship-resistance properties of the network. 

Sticking to the fact that scalability should never come at the expense of security and decentralization, we are left with a combination of Layer 1 and Layer 2 scaling going forward into the future. 

Layer 2 Scaling Solutions

Layer 2 scaling is a collective term for solutions that help with increasing the capabilities of Layer 1 by handling transactions off-chain (off Layer 1). The 2 main capabilities that can be improved are transaction speed and transaction throughput. On top of that, Layer 2 solutions can greatly reduce the gas fees. 

When it comes to actual scaling solutions there are multiple options available. Whilst some of the options are available right now and can increase Ethereum network throughput in the near to medium-term, others are aiming for a medium to long-term time horizon.  

Some of the scaling solutions are application-specific, for example, payment channels. Others, such as optimistic rollups, can be used for any arbitrary contract executions.

To understand these differences better let’s explore the most popular Layer 2 scaling solutions.

Channels

Channels are one of the first widely discussed scaling solutions. They allow participants to exchange their transactions off-chain a number of times while only submitting two transactions to the base layer.  

The most popular types of channels are state channels and their subtype – payment channels.

Although channels have the potential to easily process thousands of transactions per second, they come with a few downsides. They don’t offer open participation – participants have to be known upfront and users have to lock up their funds in a multisig contract. On top of that, this scaling solution is application-specific and cannot be used to scale general-purpose smart contracts. 

The main project that leverages the power of state channels on Ethereum is Raiden. The concept of payment channels is also extensively used by Bitcoin’s Lightning Network. 
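The “two on-chain transactions, any number of off-chain updates” pattern can be illustrated with a toy channel – a conceptual sketch only, with no signatures or dispute logic of a real protocol:

```python
# Toy payment channel: only the opening deposit and the final settlement
# touch the chain; any number of balance updates happen off-chain.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"a": deposit_a, "b": deposit_b}
        self.onchain_txs = 1          # funding tx locks both deposits

    def pay(self, sender, receiver, amount):
        # Off-chain: both parties just co-sign an updated balance sheet.
        assert self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self):
        self.onchain_txs += 1         # settlement tx publishes final state
        return self.balances

channel = PaymentChannel(100, 100)
for _ in range(1000):                 # two thousand instant payments ...
    channel.pay("a", "b", 1)
    channel.pay("b", "a", 1)
print(channel.close())                # ... settled in a single final state
print(channel.onchain_txs)            # only 2 on-chain transactions total
```

The downside mentioned above is visible even in this toy: the participants and their deposits are fixed at channel opening, which is why channels can't scale arbitrary smart contracts.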

Plasma

Plasma is a Layer 2 scaling solution that was originally proposed by Joseph Poon and Vitalik Buterin. It’s a framework for building scalable applications on Ethereum.

Plasma leverages the use of smart contracts and Merkle trees to enable the creation of an unlimited number of child chains – copies of the parent Ethereum blockchain. 

Offloading transactions from the main chain into child chains allows for fast and cheap transactions. One of the drawbacks of Plasma is a long waiting period for users who want to withdraw their funds from Layer 2. Plasma, similarly to channels, cannot be used to scale general-purpose smart contracts.

The OMG Network is built on its own implementation of Plasma, called More Viable Plasma. Matic Network is another example of a platform using an adapted version of the Plasma framework. 

Sidechains 

Sidechains are Ethereum-compatible, independent blockchains with their own consensus models and block parameters.

Interoperability with Ethereum is made possible by using the same Ethereum Virtual Machine, so contracts deployed to the Ethereum base layer can be directly deployed to the sidechain. xDai is an example of such a sidechain. 

Rollups 

Rollups provide scaling by bundling or “rolling up” sidechain transactions into a single transaction and generating a cryptographic proof, also known as a SNARK (succinct non-interactive argument of knowledge). Only this proof is submitted to the base layer. 

With rollups, all transaction state and execution are handled in sidechains. The main Ethereum chain only stores transaction data. 
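The “bundle many transactions into one on-chain item” idea can be illustrated with a simple hash commitment over a batch – a conceptual sketch only; real rollups post compressed transaction data along with a validity or fraud proof, not a bare hash:

```python
import hashlib
import json

# Toy rollup batching: hundreds of L2 transactions are rolled up into a
# single commitment, and only that commitment lands on L1. Real rollups
# also publish compressed transaction data plus a validity/fraud proof.

l2_transactions = [{"from": f"0x{i:040x}", "to": "0xabc", "value": i}
                   for i in range(500)]

batch_bytes = json.dumps(l2_transactions, sort_keys=True).encode()
commitment = hashlib.sha256(batch_bytes).hexdigest()

print(len(l2_transactions))   # 500 L2 transactions ...
print(len(commitment))        # ... summarised by one 64-char digest on L1
```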

There are 2 types of rollups: ZK rollups and optimistic rollups. 

ZK rollups, although faster and more efficient than optimistic rollups, do not provide an easy way for existing smart contracts to migrate to Layer 2. 

Optimistic rollups run an EVM compatible Virtual Machine called OVM (Optimistic Virtual Machine) which allows for executing the same smart contracts as can be executed on Ethereum. This is really important as it makes it easier for the existing smart contracts to maintain their composability, which is extremely relevant in DeFi where all major smart contracts were already battle-tested.  

One of the main projects working on optimistic rollups is Optimism, which is getting closer and closer to their mainnet launch. 

When it comes to ZK rollups, Loopring and Deversifi are good examples of decentralized exchanges built on Layer 2. On top of that, we have zkSync enabling scalable crypto payments.

Rollups’ scalability can also be magnified by Ethereum 2.0. In fact, because rollups only need the data layer to be scaled, they can get a tremendous boost as early as Ethereum 2.0 Phase 1, which is all about the sharding of data.

Summary

Despite a spectrum of Layer 2 scaling solutions available, it looks like the Ethereum community is converging on the approach of mainly scaling through rollups and Ethereum 2.0 Phase 1 data sharding. 

This approach was also confirmed in a recent post by Vitalik Buterin called “A rollup-centric ethereum roadmap”. 

In our future articles, we’ll explore base layer scaling with Ethereum 2.0 and how both Layer 1 and Layer 2 scaling can help make decentralized finance more accessible to everyone. Stay tuned by subscribing to the channel. 

So what do you think about Ethereum’s approach to scaling? And which scaling solution would you like to learn more about?

If you enjoyed reading this article you can also check out Finematics on Youtube and Twitter.

]]>
https://finematics.com/ethereum-layer-2-scaling-explained/feed/ 0