Introducing Mosaic: Tackling Cross-Layer 2 Liquidity Provisioning by Delivering a New Means of Value Generation

0xbrainjar
9 min read · Sep 1, 2021


In this article, we present one of the main features of our cross-layer transferral system, Mosaic, and take a deep dive into the steps it takes to ensure appropriate liquidity across layer 2 (L2) protocols via an L1 farming vault. Through this novel process, we can provide a lucrative liquidity provisioning (LPing) opportunity, as indicated by our simulations. Looking forward, we will shortly host a Proof of Concept bridging Arbitrum, Polygon, and L1, enabling us to identify the optimal approach to the pressing cross-layer liquidity question.

Composable is eager to begin leveraging Mosaic to tackle a significant industry-wide challenge: creating an optimized method of liquidity provisioning (LPing) for cross-layer transfers. We enable this novel earnings opportunity with our layer 1 (L1) farming vault, which facilitates liquidity provisioning across different layer 2 (L2) platforms — beginning with our Proof of Concept (PoC) linking Polygon, Arbitrum, and the Ethereum mainnet.

The immediate vision is for Mosaic to provide cross-L2 liquidity services to projects in this space.

Ultimately, our vision is to build a liquidity-directing system that moves liquidity as necessary around the L2 space. We aim to offer these services to other projects in DeFi, mutually benefitting Composable, projects on L2s, other L2-L2 solutions, and DeFi users by ensuring a seamless and symbiotic space free of liquidity limitations.

Little is known about the liquidity requirements for cross-L2 transfers, yet liquidity has already been a limiting factor on L2s.

There have been many proposals that aim to identify the liquidity requirements for facilitating cross-layer transfers. However, none has proven effective or accurate, and there is limited data available on this topic. Yet we must understand these liquidity requirements to ensure the uninterrupted flow of assets between L2s; every L2 has encountered bridge liquidity issues that cause delays and increased transaction costs. Thus, this is a pressing problem, particularly for tech stacks that seek to drive L2 scalability, such as Composable.

Chain Swap and other protocols have aimed to resolve issues of cross-chain liquidity availability by matching unmet transfers with individuals who are paid to facilitate them. However, this solution is inefficient and not automated. In other words, users must wait for other users to retroactively meet their transfers, and without automation involved, this process can cause significant delays for certain transactions. A self-contained, automated solution is thus necessary to facilitate cross-layer liquidity: one that ensures the required liquidity is proactively present in any given L2-L2 transfer situation.

Mosaic involves providing sufficient liquidity for cross-L2 transfers via an L1 farming vault — creating an entirely new yield-bearing opportunity in the process.

To deeply understand the liquidity requirements for L2-L2 transfers, we have built a Liquidity Simulation Environment (LSE). This tool will help us design an Intelligent Liquidity Engine (ILE) to ensure cross-layer transfers are met. We will accomplish this through the following steps:

  • Phase I: leverage our LSE’s initial results, based on a series of simulated trades between two L2 platforms, Polygon and Arbitrum, along with Ethereum layer 1 (L1, or mainnet).
  • Phase II: incorporate actual cross-L2 trade data and observations, which we will obtain from our upcoming cross-layer transferral system (i.e. the ILE).
  • Phase III: begin to introduce later solutions for cross-L2 liquidity, as informed by the prior stages; more information on these solutions will be announced as they are constructed and as they incorporate additional L2s.

Composable’s Proof of Concept is to be tested through an L2-L2 bridge connecting Polygon (L2) and Arbitrum (L2) along with the Ethereum mainnet (L1).

In Phase I, our LSE seeks to maximize liquidity across various L2 vaults via our L1 farming vault, under the constraint of limited overall capital. Within this objective, the primary metric our LSE seeks to optimize is LP fees; we aim to create a solution for providing cross-layer liquidity that also maximally rewards the LPs delivering this vital service. There are presently no options for providing liquidity on L2-L2 transfers in a scalable manner, and we want to develop this opportunity to be as valuable as possible. In addition to providing LPs with an associated annual percentage yield (APY), we can also use platform fees to buy back our LAYR token and distribute it to LPs as an additional reward. To further improve the model, we will also have it perform automated rebalancing.

When we incorporate real data obtained from our cross-layer transferral system and ILE in Phase II, we aim to determine the optimal liquidity distribution.

Phase I is complete, providing us with evidence of the significant potential value of providing liquidity for cross-L2 transfers.

Goals:

  • Facilitate proper and scalable L2-L2 transactions that are not constrained by liquidity requirements.
  • Create new farming opportunities that offer LPs significant returns for providing liquidity on L2-L2 transfers — a brand-new way of earning yield.

Simulation Details:

We’ve built internal simulation software composed of: a simulation manager in the form of a state machine (SM), a strategy for provisioning liquidity (defining under what conditions liquidity is moved), and a visualization piece that summarizes our findings. The SM calls the strategy in a series of discrete steps, first performing any internal vault-to-vault actions, such as redistributing liquidity from L1 to L2 if needed. Then the SM provides the strategy with the set of liquidity movements, or trades, that occurred at that discrete index — call it t (which can be thought of as time, or as bundles of transactions, e.g., on that day or in that block). Supported by an L1 farming vault, the SM moves the liquidity around and charges a fee for this, which we collect. Following this, the SM moves to the next time step t + 1, and so on. Internally, a cache is passed between states to support time-dependent variables and dependencies, such as delays in liquidity movements between vaults or layers.
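
To make this concrete, here is a minimal Python sketch of what such a state-machine loop might look like. The class and method names (Vault, SimulationManager, rebalance, fee) are illustrative assumptions, not the actual LSE code:

```python
from dataclasses import dataclass, field


@dataclass
class Vault:
    """Liquidity held on one layer (L1 or a specific L2)."""
    name: str
    balance: float


@dataclass
class SimulationManager:
    """Illustrative state machine: advances one discrete index t at a time."""
    vaults: dict                               # e.g. {"L1": Vault("L1", 8e6), "polygon": Vault("polygon", 1e6)}
    strategy: object                           # assumed to expose rebalance(vaults, cache) and fee(amount, origin_vault)
    cache: dict = field(default_factory=dict)  # carries time-dependent state, e.g. pending exit delays
    fees_collected: float = 0.0

    def step(self, t, trades):
        # 1) Internal vault-to-vault actions first, e.g. topping an L2 vault up from L1.
        self.strategy.rebalance(self.vaults, self.cache)
        # 2) Apply the trades that occurred at index t, charging a fee on each movement.
        for origin, destination, amount in trades:
            fee = self.strategy.fee(amount, self.vaults[origin])
            self.vaults[origin].balance -= amount
            self.vaults[destination].balance += amount - fee
            self.fees_collected += fee

    def run(self, trade_schedule):
        # trade_schedule[t] is the list of (origin, destination, amount) trades at index t.
        for t, trades in enumerate(trade_schedule):
            self.step(t, trades)
```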

In Phase I, we simulate trades as we believe they will occur, drawn from a truncated Gaussian distribution with user-defined parameters: the mean specifies the amount of tokens moved on average, the standard deviation specifies how much this amount varies around the mean, and the number of draws N is specified as well. These represent the “retail” trades/swaps.

To simulate “whale” moves, we also allow the user to add n large moves, distributed evenly across the N time indexes (n < N). Whale moves generally put significant pressure on the liquidity system since they are, by definition, large relative to the available liquidity.
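
As a rough sketch of how such a trade stream could be generated, the snippet below draws the N retail trade sizes from scipy’s truncated normal distribution and spreads n whale moves evenly across the time indexes. The function name and the whale_size parameter are our own assumptions, not the LSE’s actual interface:

```python
import numpy as np
from scipy.stats import truncnorm


def simulate_trade_sizes(mean, std, n_retail, n_whale, whale_size, lower=0.0, seed=None):
    """Draw N retail trade sizes from a Gaussian truncated at `lower`, then place
    n whale moves of size `whale_size` evenly across the N time indexes (n < N)."""
    rng = np.random.default_rng(seed)

    # scipy expects the truncation bounds expressed in units of standard deviations.
    a, b = (lower - mean) / std, np.inf
    retail = truncnorm.rvs(a, b, loc=mean, scale=std, size=n_retail, random_state=rng)

    trades = [{"t": t, "amount": float(amount), "whale": False} for t, amount in enumerate(retail)]
    for t in np.linspace(0, n_retail - 1, n_whale, dtype=int):
        trades.append({"t": int(t), "amount": whale_size, "whale": True})
    return trades
```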

It is also important to emphasize the scalability of our implementation: the code is written to accept any strategy the user chooses to provide, while the engine stays the same. In this way, it is “plug’n’play”. This has at least two main benefits: first, it is easy for the developer to maintain, and second, it allows for rapid, massively parallel iteration over many strategies, increasing throughput during the discovery phase when searching for the optimal ILE.

Specifically, imagine the ILE as parameterized via a parameter set P. Examples of parameters are how many tokens to seed the vaults with initially, when to replenish an L2 vault from the L1 vault (expressed in terms of the available liquidity), and so on. We want to find the optimal parameter set p* ∈ P.
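
As a simple illustration, a parameter set p ∈ P could be expressed as a small container of tunable values; the two fields below mirror the examples just mentioned and are assumptions, not the full set the LSE actually searches over:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ILEParams:
    """One candidate parameter set p in P (illustrative)."""
    seed_fraction: float         # share of total capital used to seed each L2 vault initially
    replenish_threshold: float   # top an L2 vault up from L1 when it drops below this share of its seed
```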

We will leverage optimization techniques to do this. In the brute-force case, we simply try all possible values of p ∈ P (which may or may not be feasible). However, the idea here is to leverage more advanced techniques such as Genetic Algorithms and/or Bayesian Global Optimization. The point, in the context of the LSE, is that the search for p* requires many calls of the LSE, so having the strategy and the LSE decoupled is key.
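
In the brute-force case, that decoupling reduces the search to a loop over a parameter grid, with one LSE run per candidate. A minimal sketch, assuming a run_lse(params) function that runs one simulation and returns the fees collected (a stand-in, not a real function name):

```python
import itertools


def brute_force_search(run_lse, grid):
    """Try every combination in `grid` and keep the highest-scoring parameter set p*."""
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        score = run_lse(params)          # one full LSE realization per candidate
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score


# Illustrative grid over the parameters discussed above:
# grid = {"seed_fraction": [0.05, 0.10, 0.20], "replenish_threshold": [0.6, 0.8, 0.9]}
```

More advanced searches (Genetic Algorithms, Bayesian Global Optimization) would replace this loop but call run_lse in exactly the same way.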

The fee model used by the LSE to charge for trades is modular too, and can be swapped out at will. Similar arguments for finding p* apply to finding the optimal fee model.

Results from the PoC and LSE are found by simulating a set of hypothetical trades. We consider one L1 vault and two L2 vaults on two separate L2 platforms.

We defined the following strategy to simulate with the environment, but note that the environment is not the strategy. The environment (i.e., the state machine, or the SM) uses and runs the strategy against the data — the two are logically decoupled.

In the basic strategy, used here just to showcase the simulation environment, the following rules are enforced (a code sketch follows the list):

  • Seed each vault with 10% of total initial capital.
  • If the liquidity in any L2 vault falls below 80% of seed: replenish from L1.
  • Never move liquidity from L2 to L1.
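
Here is a minimal sketch of these three rules, assuming plain per-vault balances; the function names and data layout are ours, not the LSE’s actual code:

```python
def seed_vaults(total_capital, l2_names, seed_fraction=0.10):
    """Seed each L2 vault with 10% of total initial capital; the remainder stays on L1."""
    seeds = {name: seed_fraction * total_capital for name in l2_names}
    balances = dict(seeds)
    balances["L1"] = total_capital - sum(seeds.values())
    return balances, seeds


def rebalance(balances, seeds, replenish_threshold=0.80):
    """If an L2 vault falls below 80% of its seed, top it back up from L1.
    Liquidity is never moved from L2 back to L1 in this basic strategy."""
    for name, seed in seeds.items():
        if balances[name] < replenish_threshold * seed:
            top_up = min(seed - balances[name], balances["L1"])   # L1 can never go negative
            balances["L1"] -= top_up
            balances[name] += top_up
```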

We ran this strategy in our LSE with the following linear fee model, which is part of a set of dynamic fee models (a code sketch follows the figure):

  • If a trade size is ≥30% of available liquidity in the origin vault, charge a fee of 5% of the tokens moved.
  • Otherwise, charge at a linear rate defined by the straight line from (0%, 0%) to (30%, 5%), where the first coordinate is the transfer size as a percent of available liquidity and the second is the fee as a percent of the tokens moved.

Figure: the fee model. The x axis represents the trade size as a percent of the available liquidity in the origin vault, and the y axis shows the corresponding fee as a percent of the tokens moved.
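
A sketch of this fee model as a single function, with the 30% threshold and 5% cap exposed as parameters (the names are ours):

```python
def linear_fee(trade_amount, origin_liquidity, cap_ratio=0.30, max_fee_rate=0.05):
    """Fee charged (in tokens) for a transfer, per the linear model above:
    trades at or above 30% of the origin vault's available liquidity pay 5%;
    smaller trades pay along the straight line from (0, 0) to (30%, 5%)."""
    size_ratio = trade_amount / origin_liquidity
    fee_rate = max_fee_rate if size_ratio >= cap_ratio else max_fee_rate * size_ratio / cap_ratio
    return fee_rate * trade_amount
```

For example, a trade worth 15% of the origin vault’s available liquidity would be charged 2.5% of the tokens moved.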

Then, we created and ran the simulation by drawing trades from the truncated Gaussian distribution.

As the modeling gets more advanced, we can incorporate more moves and features. For example, we imagine implementing movement from L2 back to L1, which in turn would incur a delay (also called the exit time) for the user — something the LSE is set up to handle. This delay can be captured in the optimization problem when looking for the optimal parameter set for a given ILE: we can quantify the delay across all trades, and the new objective then becomes maximizing fee revenue while minimizing the delay for users, among other metrics. Similar metric combinations can be incorporated, quantified, and optimized for in a multi-dimensional optimization, creating a Pareto frontier of solutions. We have just begun scratching the surface.
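
As a hedged sketch of that multi-objective step, the helper below keeps only the non-dominated candidates when fee revenue is maximized and user delay is minimized; the result format is an assumption:

```python
def pareto_frontier(results):
    """Keep the non-dominated candidates from a list of dicts like
    {"params": ..., "fees": ..., "delay": ...}, where fees are maximized
    and delay is minimized."""
    frontier = []
    for r in results:
        dominated = any(
            other["fees"] >= r["fees"] and other["delay"] <= r["delay"]
            and (other["fees"] > r["fees"] or other["delay"] < r["delay"])
            for other in results
        )
        if not dominated:
            frontier.append(r)
    return frontier
```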

Key Findings:

The topmost panel plots the Liquidity Simulation Environment (LSE) run with the previously defined strategy, fee model, and trades; whale moves are shown as dashed gray vertical lines. The subsequent two panels show the liquidity in each L2 vault. The bottom-left panel depicts the fees charged (red, solid line) and the cumulative fees (green, dashed line). The bottom-right panel shows the fees collected from the trades on a semilog scale (to make the whale trades visible as well). The “regular”, or “retail”, trades appear as the first large bump between 2 and 3; the whale moves are hard to see in the plot but are located between 5 and 6. All panels are plotted against a hypothetical time via the time index (which can be thought of more broadly as discrete steps).

The most critical finding from our LSE thus far is that, running the simulation with the previously detailed 1,000 simulated cross-L2 trades (which we assume represents one week’s worth of trades) and $10 million in liquidity, approximately $1.6 million would be generated for LPs providing liquidity from L1. In other words, LPs would be positioned to earn an impressive 16% return per week for providing liquidity for cross-L2 transfers.

We also found that, starting at a total liquidity, post-L2-seed, of 6 million tokens, we end up with around 2.3 million in the L1 vault. In other words, we do not deplete L1 in this case. Further, we found that whale moves put pressure on the L2 vaults and almost every time require liquidity replenishment from L1. Sometimes this replenishment is not needed, simply because the L2 vaults have built up enough liquidity to handle the trade on their own.

Of course, we would like to be very clear that this is just a simulation, and does not include real-world data or liquidity — there is absolutely no guarantee of this return. However, it does appear that LPing for cross-L2 transactions is positioned to be a lucrative new opportunity for DeFi users. We will further test this model with data collected from our PoC connecting Polygon, Arbitrum, and mainnet; data from this infrastructure will be run through the simulation to further explore the L2 liquidity provisioning opportunity.

Next, we are building probabilistic capabilities into the tool: since the transfers are randomly drawn, each realization of an LSE run will produce a different liquidity evolution over time. We can thus re-run the LSE many times and collect all of the resulting curves. From these, we can express the probabilities that the L1 vault depletes, that a given fee revenue is met, that an L2 vault depletes, and so on.
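
A minimal sketch of that probabilistic layer: re-run the LSE with fresh random draws and turn the collected outcomes into empirical probabilities. As before, run_lse and its return format are stand-ins, not the real interface:

```python
import numpy as np


def estimate_probabilities(run_lse, params, n_runs=1000, fee_target=1.6e6):
    """Estimate, over many LSE realizations, the probability that the L1 vault
    depletes and that total fee revenue meets a target. run_lse(params, seed) is
    assumed to return {"l1_depleted": bool, "total_fees": float} for one run."""
    l1_depleted, total_fees = [], []
    for seed in range(n_runs):
        outcome = run_lse(params, seed=seed)
        l1_depleted.append(outcome["l1_depleted"])
        total_fees.append(outcome["total_fees"])
    return {
        "p_l1_depletes": float(np.mean(l1_depleted)),
        "p_fee_target_met": float(np.mean(np.array(total_fees) >= fee_target)),
    }
```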

To test this model, we will be launching a PoC allowing users to swap between Polygon, Arbitrum, and mainnet.

The Polygon-Arbitrum PoC will act as a fully experimental approach for us to gather information that augments our existing simulation results. This will be critical in providing us with more realistic data to help us determine the best approach to resolving the liquidity management hurdle that has arisen across L2s.

With the upcoming launch of our cross-layer transferral infrastructure’s PoC, we will have the real-world data needed to further refine our model and determine the optimal means of balancing liquidity across L2s.

Stay tuned for this launch.

If you are a developer with a project you think fits our ecosystem and goals, and you would like to participate in our interactive testing landscape at Composable Labs, reach out to me on Telegram at @brainjar.


0xbrainjar

Composable Finance Founder & CEO. I write about R&D at Composable Finance. Physicist by training.