Liquidity Forecasting in Mosaic: Part III

Dec 17, 2021

By Jesper Kristensen and 0xbrainjar

The ability to forecast liquidity values in Mosaic’s network vaults helps improve the overall system’s performance and our user experience, supporting larger transfer sizes and a higher rate of successful transfers.

We introduced the Autoregressive Integrated Moving Average (ARIMA) model and hand-picked some parameters to explore liquidity forecasting in part one and part two. We also introduced other forecasting models such as Holt’s linear trend (HLT) and Holt-Winters’ seasonal method (HWS).

In this work, we will accomplish two main things: first, we will implement an automated way to determine the parameters for our ARIMA model. Instead of hand-picking these parameters, they will be computed automatically for each new moving window over our data. Second, we will compare our solution to the auto ARIMA Python package (ported over from the R language).

Automated model selection for ARIMA

The purpose of automating the model selection for the ARIMA order parameters is so that users do not have to resort to manual trial and error whenever a new dataset is provided.

Below we present the criteria that we have used to identify the optimal order of differencing (d) in the data, as well as the orders for the autoregressive (AR) and moving average (MA) terms (p and q, respectively) for our ARIMA (p, d, q) model.

Identifying the order of differencing in the data

The first step in fitting an ARIMA model is determining the order of differencing needed to “stationarize” the series. Usually, the correct amount of differencing is the lowest order of differencing that yields a time series that fluctuates around a well-defined mean value whose autocorrelation function (ACF) plot decays fairly rapidly to zero. This decay can come from either above or below.

Although most of these characteristics can be observed by simply looking at the differenced data plots, to automate our model selection procedure, we work primarily with the autocorrelation function. The first rule that we apply is that if the series has positive autocorrelations out to a high number of lags, we increase the order of differencing by one. If the series still exhibits a long-term trend, lacks a tendency to return to its mean value, or its autocorrelations remain positive out to a high number of lags (for instance, ten or more), it needs a higher order of differencing.

A lag-1 autocorrelation below −0.5 indicates that the time series might be overdifferenced. To apply these two rules in practice, we fit an ARIMA (0, d, 0) model. This model has no AR or MA terms; it only has a constant term that provides an estimate of the mean of the data when trained. Thus, the residuals of this model are simply the deviations from the mean.

Once we identify a sufficient d, one for which the autocorrelation function drops to small values past lag-1, we compare the resulting model with an ARIMA (0, d+1, 0). Assuming that the lag-1 autocorrelation does not fall below −0.5 (which would be a sign of overdifferencing), if the d+1 model exhibits lower standard deviation values, it is preferred over d. Otherwise, we keep the original model and proceed with selecting optimal p and q orders.
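To make the procedure concrete, here is a minimal sketch of the differencing rules above, not our production code, using numpy and statsmodels. The thresholds follow the text (positive autocorrelations out to ten or more lags, a lag-1 cutoff of −0.5); the function name select_d and the max_d cap are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def select_d(series, max_d=3, n_lags=20):
    x = np.asarray(series, dtype=float)
    d = 0
    # Rule 1: keep differencing while the series shows positive
    # autocorrelations out to a high number of lags (ten or more here).
    while d < max_d and np.sum(acf(np.diff(x, n=d), nlags=n_lags, fft=True)[1:] > 0) >= 10:
        d += 1
    # ARIMA(0, d, 0) residuals are just deviations from the mean of the
    # d-times-differenced series, so compare their spread for d and d+1,
    # accepting d+1 only if the lag-1 autocorrelation stays above -0.5.
    resid_d, resid_d1 = np.diff(x, n=d), np.diff(x, n=d + 1)
    if np.std(resid_d1) < np.std(resid_d) and acf(resid_d1, nlags=1, fft=True)[1] > -0.5:
        d += 1
    return d
```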

Identifying the AR (p) and MA (q) orders

To identify the number of autoregressive and moving average terms, we proceed as follows: for the number p of AR terms, we set it equal to the number of lag terms that it takes for the partial autocorrelation function (PACF) to cross the significance limit. For the number q of MA terms, we use the autocorrelation function (ACF) instead. This is also set equal to the number of lag terms that it takes to cross the significance limit.
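The rule above can be sketched as follows: count lags until the PACF (for p) or the ACF (for q) first falls inside an approximate 95% significance band of ±1.96/√N. The helper name first_insignificant_lag is ours, and the band width is a standard convention rather than necessarily the exact limit used in our implementation.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def first_insignificant_lag(values, n_obs):
    bound = 1.96 / np.sqrt(n_obs)      # approximate 95% significance limit
    for lag, v in enumerate(values[1:], start=1):
        if abs(v) < bound:             # first lag that crosses into the band
            return lag
    return len(values) - 1

def select_p_q(series, n_lags=20):
    x = np.asarray(series, dtype=float)
    p = first_insignificant_lag(pacf(x, nlags=n_lags), len(x))
    q = first_insignificant_lag(acf(x, nlags=n_lags, fft=True), len(x))
    return p, q
```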

Optimizing ARIMA for our liquidity simulation environment (LSE) data

After running our model selection algorithm every time we shift the time frame 10 time steps ahead, we find that in all cases the number of AR terms varies from 3 to 6 while the optimal number of MA terms is always 1 for both datasets 1 & 2. Next, we compare the predictive performance of these models with the results we obtained previously using p=q=20.
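A rolling-window version of this procedure might look like the sketch below, reusing the hypothetical select_d and select_p_q helpers from the earlier snippets; the window length and 10-step stride are illustrative assumptions.

```python
import numpy as np

def rolling_orders(series, window=200, stride=10):
    orders = []
    for start in range(0, len(series) - window + 1, stride):
        chunk = np.asarray(series[start:start + window], dtype=float)
        d = select_d(chunk)                      # differencing order for this window
        p, q = select_p_q(np.diff(chunk, n=d))   # AR/MA orders on the differenced window
        orders.append((p, d, q))
    return orders
```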

The root mean squared error (RMSE) plots across 168 days of forecasting and the two standard deviations on the last day of forecasting (t* = t + 168) are displayed below.
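For reference, the two reported metrics can be computed as in this small sketch, assuming forecast is a statsmodels get_forecast(168) result and actual holds the realized liquidity over the same 168 steps; the variable names are illustrative.

```python
import numpy as np

def forecast_metrics(forecast, actual):
    mean = np.asarray(forecast.predicted_mean)
    rmse = np.sqrt(np.mean((mean - np.asarray(actual)) ** 2))
    two_sigma_last = 2.0 * np.asarray(forecast.se_mean)[-1]  # band width at t* = t + 168
    return rmse, two_sigma_last
```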

Although the RMSE values fluctuate within the same range for both the optimized and previous models, the standard deviation errors appear more stable for the optimized models and, in many cases, reach lower values. Forecasting results for both models using the same training data are also provided below.

ARIMA model selection and forecasting on the Proof of Concept (PoC) data

Once our framework has been validated, our ultimate goal is to use it for forecasting on real-world data. Here, we use our LSE code with our available dataset to generate a set of synthetic data points sampled from empirical distributions built upon our PoC data. In particular, we are working with transfers from our Polygon (POL) vault to our Arbitrum (ARB) vault and vice versa.

The transfer data between POL and ARB consists of 541 transfers from ARB to POL and 602 from POL to ARB. Each dataset is used to build an empirical distribution via Kernel Density Estimation (KDE) with Gaussian components and an optimized bandwidth, which is then used to resample transactions. A set of 1,000 simulated transactions is drawn from each KDE-based density; these are treated as the transfers over a period of 1,000 hours. The histograms of both real-world and simulated transfers are shown here:
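A sketch of this resampling step is shown below, assuming transfers is a 1-D array of historical transfer sizes for one direction (e.g., ARB to POL). The bandwidth grid and 5-fold cross-validation stand in for the "optimized bandwidth" mentioned above and are illustrative choices, not necessarily the exact procedure we used.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

def simulate_transfers(transfers, n_samples=1000, seed=0):
    X = np.asarray(transfers, dtype=float).reshape(-1, 1)
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": np.logspace(-2, 2, 30)}, cv=5)
    grid.fit(X)
    kde = grid.best_estimator_          # Gaussian KDE with the selected bandwidth
    return kde.sample(n_samples, random_state=seed).ravel()  # one draw per simulated hour
```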

The simulated data is then used to identify the optimal ARIMA model to fit. These figures show the ACF and PACF corresponding to the first 200 time steps of the time series, where it is apparent that p = 5 and q = 1 are the optimal choices.

First-order differencing of the data is again necessary to enforce stationarity; therefore, ARIMA (5, 1, 1) is the optimal choice. Applying the same algorithm to all available time frames of this time series gives us a varying p from 3 to 6, and q = 1 remains the same throughout history. The forecasting performance of this ARIMA model is shown below compared to Holt’s Linear model.
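The comparison above can be reproduced with a sketch along these lines, fitting the selected ARIMA (5, 1, 1) model and Holt’s linear trend model in statsmodels and producing the forecasts shown in the figure; the 168-step horizon is an illustrative choice.

```python
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import Holt

def fit_and_forecast(series, horizon=168):
    arima_fc = ARIMA(series, order=(5, 1, 1)).fit().get_forecast(horizon)
    holt_fc = Holt(series).fit().forecast(horizon)
    # The ARIMA forecast also carries a confidence band, used below for the trigger point.
    return arima_fc.predicted_mean, arima_fc.conf_int(alpha=0.05), holt_fc
```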

The black point on the graph shows when the ARIMA model predicts that a 120% liquidity level is reached in the vault by conservative estimates (the upper confidence level). The purple point shows when the HLT model predicts the same 120% liquidity level. While both predictions can be used to trigger a replenishment event in advance, the ARIMA model provides the more conservative estimate due to its ability to capture nonlinear trends and to account for confidence intervals.
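The replenishment trigger itself reduces to a threshold crossing, sketched here: find the first forecast step at which the conservative (upper confidence) estimate reaches the 120% liquidity level. The array and function names are assumptions for illustration.

```python
import numpy as np

def first_crossing(upper_conf, threshold=1.20):
    """Return the first forecast index where the upper confidence bound
    reaches the threshold, or None if it never does."""
    hits = np.flatnonzero(np.asarray(upper_conf) >= threshold)
    return int(hits[0]) if hits.size else None
```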

Finally, we compare the performance of our model selection algorithm with the open-source auto-arima baseline to ensure the credibility of our results. As opposed to our ACF and PACF criteria for model selection, auto-arima makes use of well-known information criteria, namely Akaike (AIC), Bayesian (BIC), and Hannan-Quinn (HQIC).

All three criteria include penalty terms for model complexity; therefore, it is expected that the simplest models will tend to be preferred when they are used for selection. Both codes apply first-order differencing to the data to ensure stationarity. In a one-on-one comparison of the two codes on multiple datasets corresponding to different time frames, we find that the order of differencing is in agreement.
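The baseline side of this comparison can be reproduced roughly as follows, using the pmdarima port of R’s auto.arima. The information_criterion values match the criteria named above, and fixing d=1 mirrors the differencing both codes agree on; the wrapper function itself is our own illustrative scaffolding.

```python
import pmdarima as pm

def auto_arima_orders(series):
    orders = {}
    for ic in ("aic", "bic", "hqic"):
        model = pm.auto_arima(series, d=1, information_criterion=ic,
                              seasonal=False, suppress_warnings=True)
        orders[ic] = model.order          # (p, d, q) selected by this criterion
    return orders
```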

Our code identifies one MA term as necessary, while auto-arima always returns 0. Finally, the optimal numbers of AR terms for the two approaches are plotted and shown here:

This graph shows the optimal number of AR terms in the ARIMA models selected by our code and by auto-arima. In most cases, the AIC criterion results in an ARIMA (0, 1, 0) model with an intercept; in other words, a linear model suffices to describe the data, while our approach results in more complex models with nonlinear trends.

Here is an instance comparing the two models and their forecasting performance:

Conclusion

With this work, we accomplish two key objectives relating to liquidity forecasting for our Mosaic system: first, we develop a method for automating ARIMA model selection, removing the need for manual tuning. This automation is crucial for the deployed production system. Second, we compare our work with out-of-the-box methods (auto-ARIMA) and find good agreement, confirming our implementation.

Next, we are developing and implementing AI- and ML-based methods for liquidity forecasting, giving us both classical and AI-based approaches. We will implement a two-tier system that leverages information from both to make the most informed forecasting and rebalancing decisions. As a result, we can provide optimal predictions of liquidity availability along the Composable Finance infrastructure, powering efficient transactions and helping users make informed decisions with their finances.

If you are a developer with a project you think fits our ecosystem and goals, and you would like to participate in our interactive testing landscape at Composable Labs, reach out to me on Telegram at @brainjar.
