Longtail Token: Block Rewards for Evolution Strategies in Financial Machine Learning

[Figure: Example of Financial Forecast Data]

Introduction

Longtail Token

Nodes that perform model training and model validation are rewarded with network tokens. Tokens are bought back from the market by an automated fund whose performance reflects the intelligence of the network's market predictors: as the fund's performance increases, it purchases more Longtail Token off the market, driving up the price. This creates a direct positive feedback loop between miners' incentive to perform better and the value of their reward. Future research may apply tokenomics practices to modelling the constraints of the network incentive ecosystem.

A model-driven proof-of-work environment gives miners an incentive to do the work better. In this case, nodes that produce high-yield models are rewarded with network tokens, which are redeemable for a share of the network fund. The highest-achieving market predictors are used by the network to manage a portfolio of decentralized assets. The better the network performs, the higher the value of the token; the higher the value of the token, the greater the incentive for network performance.

Bonding curves may be used to create a quantified expectation growth curve based on an automated buying-and-selling mechanism. Such mechanisms are common in current decentralized applications like Uniswap, where a fixed-ratio liquidity pool held between assets allows for volume-proportional price stability. Apply a bonding curve to that mechanism and you have the tools for a stablecoin, or, more interestingly, a quantified expectation growth curve.
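
To make that concrete, here is a minimal Python sketch of the two pricing mechanisms, assuming a Uniswap-style constant-product pool and a simple linear bonding curve. The function names, slope, and reserve numbers are illustrative, not part of any deployed contract.

    # Hypothetical sketch of the two pricing mechanisms described above.
    # Names and parameters are illustrative, not a deployed contract.

    def constant_product_price(reserve_token: float, reserve_base: float) -> float:
        """Spot price of the token in a Uniswap-style x*y=k pool:
        the ratio of the base-asset reserve to the token reserve."""
        return reserve_base / reserve_token

    def bonding_curve_price(supply: float, slope: float = 0.0001) -> float:
        """Linear bonding curve: price grows deterministically with supply,
        giving the 'quantified expectation growth curve' described above."""
        return slope * supply

    # Example: a pool holding 1,000,000 LTFT against 50,000 units of a base asset
    print(constant_product_price(1_000_000, 50_000))  # 0.05 per token
    print(bonding_curve_price(2_000_000))             # 200.0 at 2M supply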

Motivation

TPOT worked so well for us that we wanted to run bigger and bigger learning experiments with more data. The more data that we feed it, and the more training generations we allow it to run, the more sophisticated the predictors get.

But the problem with TPOT is that it takes a long time to run, because it runs a lot of expensive simulations. TPOT deploys many generations of machine learning models: one generation at a time, it mutates a population of pipelines, evaluates their performance, and then combines the higher performers into the next generation.
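
As a minimal sketch of that generational loop, here is what a TPOT run looks like in code. The data below is synthetic placeholder data, and the generation and population sizes are arbitrary.

    # Minimal TPOT sketch of the generational loop described above.
    # The data is synthetic; a real run would use the standardized
    # financial time series introduced later in this post.
    import numpy as np
    from tpot import TPOTRegressor
    from sklearn.model_selection import train_test_split

    X = np.random.rand(500, 10)  # placeholder features
    y = np.random.rand(500)      # placeholder target

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Each generation: mutate a population of pipelines, score them,
    # and breed the top performers into the next generation.
    tpot = TPOTRegressor(generations=5, population_size=50,
                         verbosity=2, random_state=42)
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))

    # Export the winning pipeline as a plain .py module
    tpot.export('best_pipeline.py')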

So I needed TPOT to synchronize its training across multiple computers. So I made a blockchain!

It’s an AI-driven blockchain, with humans in the loop. We are a team of engineers, designers, developers, and servers who want to bring this generation into the next generation of optimization for the planet. It’s through our optimization strategies that we can bring balance to the world. That’s why at Longtail Financial we are looking particularly for industry partnerships in sustainability, energy technology, renewable materials, and agriculture, as well as indigenous leadership.

You can read more about our investigation into sustainability and decarbonization in Shawn’s essay on financial land management for decarbonization.

A desire to scale led us to three approaches: 1. using Amazon EC2 instances to massively parallelize the training process; 2. synchronizing the training process across different machines; and 3. standardizing the data formats and performance metrics for training. With this infrastructure in place, we quickly realized that we had laid all of the groundwork for the implementation of a revolutionary proof-of-work algorithm that trains highly useful machine learning models, in this case, models that predict financial markets.

Technology

Standardized Financial Time Series

Standard training data must have the following format:

[datetime, sector, broker, symbol, type, data…]

In this way, nodes can compile as much data as they desire to achieve higher model scores. A standard training set will be published initially, but node operators are expected to pursue deeper and richer data sets to increase model performance. Additional features may be appended to the training data to achieve higher performance. This will create a marketplace for private datasets to be sold as supplements to prediction algorithms.
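
For illustration, here is one way the standard format might look as a pandas DataFrame. The feature columns after type (shown here as OHLCV) are an assumption, since the spec leaves the data… portion open-ended.

    # Illustrative only: a few rows in the standard format
    # [datetime, sector, broker, symbol, type, data...].
    # The columns after 'type' are hypothetical example features.
    import pandas as pd

    rows = [
        ["2021-03-01T09:30:00Z", "tech",   "NASDAQ",  "AAPL",
         "ohlcv", 127.8, 128.7, 127.5, 128.4, 1.2e7],
        ["2021-03-01T09:30:00Z", "crypto", "BINANCE", "BTCUSDT",
         "ohlcv", 49500.0, 49810.0, 49390.0, 49650.0, 3.1e3],
    ]

    df = pd.DataFrame(rows, columns=[
        "datetime", "sector", "broker", "symbol", "type",
        "open", "high", "low", "close", "volume",  # the 'data...' portion
    ])
    df["datetime"] = pd.to_datetime(df["datetime"])
    print(df)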

Distributed TPOT training

Model Selection

Distributed Model Validation

In the case that a node produces a better generalized model than is recorded on the public model record, the node may submit the untrained model to the network in the form of a pipeline. A pipeline is a Python module: a text file ending in the extension .py that can be interpreted by a Python interpreter. The hash of the pipeline is used as the hash of the minted block, and the pipeline can then be validated by validation nodes.
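
A minimal sketch of that hashing step, assuming SHA-256 over the raw bytes of the exported module (the post does not specify which hash function is used):

    # Sketch of deriving a block hash from a pipeline module.
    # SHA-256 is an assumption; the hash function is not specified above.
    import hashlib

    def pipeline_block_hash(path: str) -> str:
        """Hash the raw bytes of a TPOT-exported .py pipeline module."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    print(pipeline_block_hash("best_pipeline.py"))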

Specific, Relevant, and Generalized Performance Validation

Tokenomics

Tokens will be awarded to nodes that submit higher-performing prediction models. Tokens will be backed by an actively traded basket of digital tokens, managed according to the predictions output by the currently highest-scoring market predictors. A node that has produced an active predictor will continue to be compensated for as long as its predictor is used by the network to manage the network fund.

The tokenomics modelling software cadCAD will be used to model possible bounds on the price of LTFT. Inputs include the number of nodes, available compute, the price of EC2 instances, market dynamics, the advancement of AI techniques, token-user speculation, predictor performance, agent performance, and meta-agent performance. One might expect market predictors to perpetually increase in performance; however, given a crash in the value of LTFT, certain nodes might choose to stop contributing, resulting in market dynamics outpacing the aggregate intelligence of the nodes.
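
The following is a toy, plain-Python stand-in for such a model, not cadCAD's actual API. It captures just one dynamic named above, where a crash in LTFT price causes nodes to drop out and predictor performance to degrade; every coefficient is invented.

    # Toy stand-in for the cadCAD model described above (not cadCAD's API).
    # Captures one dynamic from the text: when LTFT price crashes,
    # some nodes stop contributing, which degrades predictor performance.
    import random

    price, nodes = 1.0, 100
    for step in range(50):
        performance = min(1.0, 0.5 + 0.005 * nodes)  # more nodes, better predictors
        price *= 1 + 0.1 * (performance - 0.9) + random.gauss(0, 0.05)
        if price < 0.5:    # crash: nodes drop out
            nodes = max(10, nodes - 5)
        elif price > 1.5:  # boom: nodes join
            nodes += 5
        print(f"step={step:2d} price={price:.2f} nodes={nodes} perf={performance:.2f}")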

Network Blocks

Network Incentives

The Network Fund

Every ten minutes, the network fund will refresh its head block, and the current set of predictors will be used for environment creation. Nodes are rewarded for every head block that their submitted model is a part of. For the ten minutes that the current HEAD block is selected, its predictions will be stamped on-chain and used as a forecasted oracle of market behaviour. Each individual model in the HEAD block will make predictions for its relevant sector, broker, and asset basket.
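
A hedged sketch of that cycle is shown below; every class and function name here is a hypothetical placeholder, not an existing implementation.

    # Hypothetical sketch of the ten-minute HEAD-block cycle described above.
    import time
    from dataclasses import dataclass

    BLOCK_INTERVAL = 600  # seconds: "every ten minutes" per the text above

    @dataclass
    class Predictor:
        node_id: str  # the node that submitted this model
        sector: str
        broker: str
        symbol: str

        def predict(self) -> float:
            return 0.0  # placeholder forecast for this sector/broker/asset

    def run_head_block_cycle(head: list[Predictor]) -> None:
        """One refresh: record each HEAD predictor's forecast (here, just
        printed) and credit the node whose model is in the HEAD block."""
        for p in head:
            forecast = p.predict()
            print(f"stamp on-chain: {p.sector}/{p.broker}/{p.symbol} -> {forecast}")
            print(f"reward node {p.node_id}")

    head = [Predictor("node-a", "tech", "NASDAQ", "AAPL")]  # would come from the chain
    for _ in range(2):  # two demo cycles; a real node would loop indefinitely
        run_head_block_cycle(head)
        time.sleep(BLOCK_INTERVAL)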

By doing this, we are closing the loop on self-performing AI. We are unleashing the world’s compute power into orchestrated, objective-driven AI that will incentivize the perpetuation of itself. This will be the first step toward an intelligent blockchain.

Since markets display novel emergence and metadynamics over time, machine learning predictors will change in their performance over time, and, likely, there will always be a fleet of new predictors coming online and elder predictors falling offline. This enables an emerging meta-economy of machine learning predictors to be the backbone of a truly AI-driven economy. So enough with the metaphysics; let’s get down to the implementation and testing.

Human-in-the-loop

FAQ

Why Blockchain?

What are the components?

What does a block look like?

Yogi Hacker