I gauged the evolution of Twitter sentiment towards some key DeFi protocols

Twitter sentiment evolution towards FEI (10k tweets over the period Jan '21 - Sep '21)

Twitter sentiment evolution towards Uniswap (310k tweets over the period Jan '21 - Sep '21)

Twitter sentiment evolution towards the Compound protocol (128k tweets over the period Feb '20 - Oct '21)



Twitter sentiment evolution towards the AAVE protocol (256k tweets over the period Nov '19 - Oct '21)

As you can see from the intensification of the fluctuations, real interest skyrocketed with the start of DeFi summer (239k of the 256k tweets fall in the period Jun '20 - Oct '21). Zooming in shows the details better.

The pipeline: an autoencoding BERT-derivative language model was trained on a corpus of tweets, then fine-tuned on a different corpus of tweets with sentiment labels from human annotators. The Twitter feed for a particular protocol was scraped with inclusion/exclusion filters on @, #, $ and some other markers. The scraped feed was then run through the language model, and for each data point in the resulting time series a rolling mean was computed over a heuristically chosen window; there is a sweet spot, since a smaller window keeps too much noise and a larger one loses too much detail.
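A rough sketch of the scoring-and-smoothing step, assuming an off-the-shelf tweet-sentiment checkpoint and a CSV of already-scraped tweets; the model name, file schema and 7-day window are illustrative assumptions, not the actual setup:

```python
# Minimal sketch: score scraped tweets, then smooth with a rolling mean.
import pandas as pd
from transformers import pipeline

# Any BERT-derivative fine-tuned for tweet sentiment would do here;
# this particular checkpoint is an assumption, not the author's model.
sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

# Scraped feed: one tweet per row with a timestamp and raw text
# (placeholder file name and columns).
tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# Map labels to a signed score so positive and negative sentiment
# can be averaged on a single axis.
sign = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
results = sentiment(tweets["text"].tolist(), truncation=True)
tweets["score"] = [sign[r["label"]] * r["score"] for r in results]

# Rolling mean over a heuristic window: too short is noisy,
# too long smooths away the detail.
series = (
    tweets.set_index("created_at")["score"]
    .resample("1D").mean()
    .rolling(window=7, min_periods=1).mean()
)
print(series.tail())
```

The signed score keeps positive and negative sentiment on one axis, so the rolling mean reads directly as a net-sentiment curve.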

I’m working on a tool for protocol simulation and exploration in a coherent, unified way, with the community model as one part of it. It could then be used to run stress tests, assess protocol risks, classify protocols and tokens, detect user-behaviour patterns, forecast certain protocol KPIs such as TVL, and serve in many other ways as a decision-support tool for more insightful, informed decision making. I will publish code, comments and a roadmap.

More to come: learn.klimchitsky.com


It’s gonna be a tool for protocol analysis in the vein of what Gauntlet is doing, but based on a set of DeFi-native premises:

  1. Gauntlet runs its simulations with agent-based models representing users interacting with a protocol. A model of a user is based on a set of theoretical assumptions about user behaviour patterns. This approach was developed for TradFi, where the bulk of real-life data is either not digitised at all (much of the b2c interaction happens offline) or isn't available (much of the market data is private). Hence, agent-based modelling with theoretical assumptions about incomplete data is justifiable there. For DeFi, however, agent-based modelling is a suboptimal legacy framework. Since all data about transactions and user interactions with the protocol is open and available for modelling, we can learn a model of a living protocol, or parts of it, and models of user interactions with it, directly from real-life data (a minimal sketch follows this list). Moreover, this model can be continuously fine-tuned as new data emerges.
  2. The transaction model is only half of the story. The other half is community sentiment, manifested on Twitter, Discord and Discourse. In the offline economy, inflation expectations and consumer sentiment influence consumer behaviour, and central banks gauge them with polls when modelling national economies. In DeFi we have the luxury of modelling community sentiment not with approximating polls but, again, with real-life data, while constantly fine-tuning the model.
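
To make premise 1 concrete, here is a minimal sketch, assuming transaction records have already been pulled from a node or a subgraph into a flat table; the file name, columns and cluster count are illustrative placeholders, not the actual design:

```python
# Sketch of premise 1: learning user-behaviour patterns directly from
# on-chain data instead of hand-written agent assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per protocol interaction: who, when, what, how much
# (placeholder file name and columns).
txs = pd.read_csv("protocol_transactions.csv",
                  parse_dates=["timestamp"])  # address, action, amount

# Collapse raw transactions into per-user behavioural features.
features = txs.groupby("address").agg(
    tx_count=("action", "size"),
    total_volume=("amount", "sum"),
    active_days=("timestamp", lambda t: t.dt.normalize().nunique()),
)

# Cluster users into behaviour patterns; the cluster count is a
# heuristic, not a claim about how many archetypes exist.
model = KMeans(n_clusters=5, n_init=10, random_state=0)
features["pattern"] = model.fit_predict(
    StandardScaler().fit_transform(features))

# As new blocks arrive, append fresh transactions and refit, so the
# model stays fine-tuned to the living protocol.
print(features.groupby("pattern").mean())
```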

We can learn models of on-chain activity and of community sentiment from real-life data and then merge them to get a true-to-life, DeFi-native model of a protocol, which can be constantly fine-tuned. It can then be used to build tools for explorable + explainable DeFi: running stress tests and alternative scenarios, classifying protocols and tokens, detecting user-behaviour patterns, and forecasting certain protocol KPIs, such as TVL, both for a protocol and for its partnering protocols.
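
As a toy illustration of the merged model (the file names, feature columns and one-day horizon are assumptions, not the actual design), the two learned series can be joined and fed to any supervised forecaster of a KPI such as TVL:

```python
# Sketch of the merged model: on-chain activity and community sentiment
# as joint features for forecasting a protocol KPI.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Daily series produced by the two learned models above
# (placeholder files and columns).
onchain = pd.read_csv("onchain_daily.csv", index_col="date",
                      parse_dates=True)    # e.g. tx_count, volume, tvl
sentiment = pd.read_csv("sentiment_daily.csv", index_col="date",
                        parse_dates=True)  # e.g. rolling sentiment score

data = onchain.join(sentiment, how="inner")
# Predict tomorrow's TVL from today's activity and sentiment.
data["tvl_next"] = data["tvl"].shift(-1)
data = data.dropna()

X, y = data.drop(columns=["tvl_next"]), data["tvl_next"]
split = int(len(data) * 0.8)          # time-ordered split, no shuffling
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("holdout R^2:", model.score(X[split:], y[split:]))
```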
