Instrumental Finance: LP Yield Optimization and Portfolio Management

Nov 25, 2021

By 0xmoonscraper and 0xbrainjar

Instrumental Finance opens new opportunities in the expanding Decentralized Finance (DeFi) world through its chain- and layer-agnostic portfolio management solutions. This piece shows how we compose a portfolio that automatically rebalances and earns higher yields. Prior to creating this portfolio, we produced a Portfolio Development Environment (PDE) that allows us to synthesize, backtest, and deploy new strategies. Here, we dive deeper into how this works and why it is important.

LP Position Portfolio Management

Liquidity provisioning (LPing) is the act of providing liquidity to DeFi pools. In return, liquidity providers (LPs, whom we will call users) receive yield from the trading activity in the pool they deposit into. Because DeFi applications are currently fragmented across chains and layers, similar pools may exist on different networks. For example, the tricrypto Curve pool may exist on both Arbitrum and Polygon.

We discovered, through our initial analysis utilizing the PDE, that LPing into the Arbitrum pool while never moving the funds did not result in maximized profit and loss (PnL) or return. As described in 0xbrainjar’s article about pool yield arbitrage, we can do better.

To address this shortcoming, the Instrumental portfolio consists of a set of LP positions, and we utilize state-of-the-art infrastructure to monitor the performance of each relevant pool. To maximize yield, we use our PDE to retrieve data from the various networks, update the strategies, and take action when needed, such as moving an LP position from one network to another. Through our research, we have developed a set of alpha strategies that are combined into an overall portfolio; a rough sketch of how such a portfolio can be represented follows.
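
As a minimal illustration, the portfolio can be thought of as a collection of LP positions, each tagged with its current network. The names and fields below are our illustrative assumptions, not the production code:

```python
from dataclasses import dataclass

# Hypothetical, illustrative representation of a portfolio of LP positions,
# each tied to a pool on a specific network.
@dataclass
class LPPosition:
    pool: str        # e.g. "tricrypto"
    network: str     # e.g. "arbitrum", "polygon"
    amount_usdc: float

@dataclass
class Portfolio:
    positions: list[LPPosition]

    def move(self, position: LPPosition, target_network: str) -> None:
        """Reassign a position to another network (bridging handled elsewhere)."""
        position.network = target_network

portfolio = Portfolio(positions=[LPPosition("tricrypto", "arbitrum", 100_000.0)])
```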

Our PDE is modular and written mainly in the Python programming language. It currently consists of a strategy, evaluator, and visualizer. The strategy depicted was developed and backtested in the PDE, and the visualizer was used to produce the plot below. Note: our PDE is capable of creating many other views of the performance.
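
To make the modular layout concrete, here is a minimal sketch of what the three components could look like; the class names and interfaces are illustrative assumptions on our part, not the actual PDE source:

```python
# Illustrative sketch of a strategy / evaluator / visualizer split.
class Strategy:
    def on_data(self, snapshot: dict) -> list[str]:
        """Consume one point-in-time data snapshot; return actions (e.g. transfers)."""
        raise NotImplementedError

class Evaluator:
    def __init__(self):
        self.pnl_history: list[float] = []

    def record(self, pnl: float) -> None:
        self.pnl_history.append(pnl)

class Visualizer:
    def plot_pnl(self, evaluator: Evaluator) -> None:
        import matplotlib.pyplot as plt
        plt.plot(evaluator.pnl_history)
        plt.xlabel("time step")
        plt.ylabel("PnL")
        plt.show()
```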

This figure shows one strategy whose PnL was obtained from a competitive moving-average crossover system deployed across networks for Curve LP pools. This PnL would correspond to more than 30% APY; as always, past performance does not guarantee future returns.

Automated Portfolio Rebalancing

Our rebalancer checks for underlying changes in pool activity at a cadence of one hour and makes a set of transfer decisions based on what it finds. The cadence can be shortened if transfer activity becomes more frequent. The rebalancer continuously collects data, and each network snapshot is captured in graph form and stored in a database. The bridge aggregator computes how best to transfer funds between networks, and actions are taken based on those recommendations. Below is an overview plot of the rebalancing system.

In our rebalancing system, data is obtained and models are built, with plot insets shown emanating from each circle via dashed black lines. Networks (the nodes/circles) are connected by edges (solid black lines) representing our bridge aggregator, which includes Mosaic. When necessary, actions are taken by the rebalancer (depicted by the red arrows with text).
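
A snapshot in this graph form could be stored as simply as the following; the schema and numbers are assumptions for illustration, not the production database format:

```python
# Illustrative graph snapshot: networks are nodes, bridge routes are edges
# weighted by the transfer cost quoted by the bridge aggregator.
snapshot = {
    "nodes": {
        "arbitrum": {"pool": "tricrypto", "hourly_yield_usdc": 120.0},
        "polygon": {"pool": "tricrypto", "hourly_yield_usdc": 175.0},
    },
    "edges": {
        # cost quoted by the bridge aggregator (which includes Mosaic)
        ("arbitrum", "polygon"): {"transfer_cost_usdc": 40.0},
    },
}
```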

The transfer depicted in the graphic above has a cost. Funds are only moved if the forecast activity in a pool (shown as the red shaded regions, i.e., confidence intervals, in the plot insets) or the directly detected activity exceeds the transfer cost. The activity we refer to is trading activity, measured as the USDC yield accruing to the user from the traded funds.
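
A minimal sketch of that decision rule follows; the holding horizon and all figures are placeholder assumptions, not live pool data:

```python
HORIZON_HOURS = 24  # assumed holding horizon; an illustration, not from the article

def should_move(yield_src: float, yield_dst: float, transfer_cost: float) -> bool:
    """Move funds only if the extra USDC yield expected over the horizon
    exceeds the cost of bridging them."""
    expected_gain = HORIZON_HOURS * (yield_dst - yield_src)
    return expected_gain > transfer_cost

# Example: the target pool is forecast to out-yield the source by 2.3 USDC/hour
# and bridging costs 40 USDC: 24 * 2.3 = 55.2 > 40, so the transfer pays off.
print(should_move(yield_src=10.0, yield_dst=12.3, transfer_cost=40.0))  # True
```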

For rebalancing, we utilize several forecasting methods and technologies, some of which we discussed in a previous post. We have advanced further since that article was written: we now include automated parameter computation, have replaced the Akaike selection criterion, and have incorporated our learnings from Mosaic's real-world PoC data into our ML-based forecasting process.
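
The exact models are not disclosed here, but as a generic illustration of a statistical forecast with confidence intervals (the red shaded regions described above), one could use an ARIMA model from statsmodels; the order and the synthetic data are purely illustrative:

```python
# Illustrative only: a generic forecast with a confidence band.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
yields = 100 + np.cumsum(rng.normal(0, 5, size=200))  # synthetic hourly USDC yield

model = ARIMA(yields, order=(2, 1, 1)).fit()
forecast = model.get_forecast(steps=24)
mean = forecast.predicted_mean        # point forecast for the next 24 hours
lower, upper = forecast.conf_int().T  # 95% confidence band
```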

The immediate roadmap for improving our forecasting capability involves exploring AI-based approaches, one such avenue being the long short-term memory (LSTM) model.
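
As a purely exploratory sketch of that avenue, a minimal LSTM forecaster in PyTorch could look like the following; the architecture, window length, and sizes are placeholder assumptions, not a deployed model:

```python
import torch
import torch.nn as nn

class YieldLSTM(nn.Module):
    """Predict the next pool yield from a window of past yields."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, 1) window of past pool yields
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # next-step yield prediction

model = YieldLSTM()
window = torch.randn(8, 48, 1)  # 8 synthetic 48-hour windows
print(model(window).shape)      # torch.Size([8, 1])
```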

More on the Portfolio Development Environment

In essence, this PDE is a state machine that is written in the Python programming language. It starts at a point in the past and sees the data in a point-in-time fashion to avoid any look-ahead bias. Then, the strategy evolves forward in time and updates as new data arrives.
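
A minimal sketch of that state-machine loop might look like this; the evaluate hook is hypothetical, and the strategy interface follows the illustrative sketch earlier in this piece:

```python
# `history` is a chronologically ordered list of data snapshots; the strategy
# only ever sees the current snapshot and its own past state, so look-ahead
# bias is avoided by construction.
def evaluate(actions: list[str], snapshot: dict) -> float:
    """Hypothetical evaluation hook: net USDC earned at this step."""
    return snapshot.get("hourly_yield_usdc", 0.0)

def backtest(strategy, history: list[dict]) -> list[float]:
    pnl = []
    for snapshot in history:                  # strictly forward in time
        actions = strategy.on_data(snapshot)  # decisions from past data only
        pnl.append(evaluate(actions, snapshot))
    return pnl
```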

We previously mentioned that our PDE is used to create, test, and deploy strategies. To go into more depth, the PDE can run any strategy and backtest it against historical pool data to confirm its performance. As with any trained strategy, however, it is important to avoid overfitting.

As a reminder, constantly switching pools means fees eat away at your portfolio; turnover therefore needs to be controlled alongside the other metrics we account for. Statistical testing of a strategy is always an important way to measure success. Without going into detail: we can perturb the parameters of the strategy to find the best values while also checking that the strategy's response to those perturbations is robust, as sketched below.
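
For illustration, a simple perturbation sweep of this kind could look like the following; run_backtest is a hypothetical callable that maps a parameter value to total PnL:

```python
import numpy as np

def robustness_sweep(run_backtest, center: float, rel_step: float = 0.1, n: int = 7):
    """Sweep a strategy parameter around a candidate value and report how
    stable the PnL is: a sharp, fragile spike suggests overfitting, while a
    broad plateau suggests a robust optimum."""
    params = center * (1 + rel_step * np.arange(-(n // 2), n // 2 + 1))
    pnls = np.array([run_backtest(p) for p in params])
    return params, pnls, pnls.std()  # high spread flags a fragile optimum
```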

Using the PDE, we discovered several strong strategies to deploy, including one that uses a moving-average crossover system to detect changes in pool activity. Notably, we implemented the moving averages in a streaming fashion to avoid repeatedly recomputing them over the entire dataset.
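
As an illustration of that streaming approach, here is a sketch of an O(1)-per-update moving average together with a crossover check; the window lengths and data are placeholders, not our production parameters:

```python
from collections import deque

class StreamingMA:
    """Incremental moving average: each update is O(1), so the full window
    never has to be re-summed as new pool data arrives."""
    def __init__(self, window: int):
        self.window = window
        self.buf: deque[float] = deque(maxlen=window)
        self.total = 0.0

    def update(self, x: float) -> float:
        if len(self.buf) == self.window:
            self.total -= self.buf[0]  # value about to be evicted
        self.buf.append(x)
        self.total += x
        return self.total / len(self.buf)

# Crossover detection between a fast and a slow average.
fast, slow = StreamingMA(12), StreamingMA(48)
prev_diff = 0.0
for x in [1.0, 1.2, 0.9, 1.5, 1.4]:  # synthetic pool-activity stream
    diff = fast.update(x) - slow.update(x)
    if prev_diff <= 0 < diff:
        print("fast MA crossed above slow MA: pool activity rising")
    prev_diff = diff
```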

Conclusion

Through our PDE, built from scratch, we are able to present and execute an approach for developing alpha-generating strategies. Our yield-optimizing product leverages state-of-the-art technology under the hood, using new Mosaic-enabled methods for moving funds to where they will earn the most yield. All of this is possible while abstracting away the complexity of manually performing cross-chain and cross-layer yield optimization.

We will continue to work with dedication on developing optimal chain- and layer-agnostic strategies to help DeFi users make the most of their portfolios. As this research progresses, we will share our findings with the community.

If you have any questions, please reach out to @brainjar. For all things Instrumental, follow their socials:

Medium | Twitter | Discord
