Composing a trust-minimised future


In his December 2018 article "4 years of blockchain computing: degrees of composability", Andreessen Horowitz crypto partner Jesse Walden argued that the platforms with the most potential are those whose existing resources can be used as building blocks for higher-order applications.

Thus the most fertile ground for innovation is not where we need to build from scratch, but where we can stand on the shoulders of giants, so to speak, building on what has come before us.

In the case of distributed ledger technology, there is evidence to suggest that we are on the edge of an explosion of innovation in truly useful applications, as tools like Aurachain appear to remove key barriers.


The basis of blockchains

At their core, distributed ledgers can be thought of as networks of computers that come together to form a virtual computer on which certain actions can be undertaken.

For example, with Bitcoin there is a huge number of "miners" who hold the state of the Bitcoin blockchain and ensure that this state is verifiable, enabling the first trust-minimised peer-to-peer electronic cash system.
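
To make "verifiable state" concrete, here is a minimal sketch in Python of a hash-linked chain of blocks. It is nothing like Bitcoin's actual implementation (there is no mining, networking or consensus here), but it shows the core idea: anyone holding a copy of the chain can independently re-derive every link and detect tampering.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    prev_hash: str
    data: str

    def hash(self) -> str:
        # Hash the block's full contents, including the link to its parent,
        # so changing any past block breaks every later link.
        payload = json.dumps([self.index, self.prev_hash, self.data])
        return hashlib.sha256(payload.encode()).hexdigest()

def verify(chain: list[Block]) -> bool:
    """Any participant can re-check every link without trusting anyone."""
    return all(
        chain[i].prev_hash == chain[i - 1].hash()
        for i in range(1, len(chain))
    )

genesis = Block(0, "0" * 64, "genesis")
chain = [genesis, Block(1, genesis.hash(), "alice pays bob 1 BTC")]
assert verify(chain)

chain[0].data = "alice pays mallory 1 BTC"  # tampering with history...
assert not verify(chain)                    # ...is immediately detectable
```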

We have now seen a proliferation of new technologies and chains built to extend some of the magic and utility of the Bitcoin blockchain, from the "decentralised world computer" of Ethereum to more permissioned systems such as JP Morgan's Ethereum fork Quorum or Hyperledger Fabric.

The low-level code powering many of these systems is reminiscent of the early days of computation, when anything could be built, but only painstakingly, as lower-order functions had to be described from scratch each time. Indeed, many of these protocols resemble TCP/IP and other foundational building blocks: either specialised for one specific use case or generalisable to almost everything.

Over time, we moved to higher and higher-order programming languages and utilities, to the point where you can now create websites and even entire programs simply by dragging and dropping components.

There has also been a proliferation in the use of standardised libraries, many of them open source for auditability and upgradeability. Much coding is now less about working from first principles and more about choosing the right elements, such as identity authorisation standards like OAuth, or building on platforms such as Node.js.


Build, but with caution

Just as these common libraries have allowed us to build giant distributed systems and consumer architectures, accelerating innovation by removing the need to repeat work over and over, we are seeing similar standardisation in the crypto world.

Much of this has been in areas around the flow of value, decentralised finance in particular, with various components on the Ethereum blockchain now falling into place to replicate existing financial structures in a decentralised, trust-minimised manner.

To show the power of this approach, Hayden Adams, the developer of the Uniswap exchange protocol, was able to build an exchange on track for $1 billion of trades in its first year on a budget of $100,000.
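
Part of why so little code was needed is that Uniswap's core pricing logic reduces to a single invariant: the product of the two reserves stays constant (x * y = k). The sketch below is in Python rather than Solidity, and deliberately omits fees, slippage protection and liquidity-provider accounting; it is an illustration of the invariant, not exchange code.

```python
class ConstantProductPool:
    """Toy constant-product market maker in the style of Uniswap (x * y = k)."""

    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y

    def swap_x_for_y(self, amount_x: float) -> float:
        # The product of the reserves must be unchanged by the trade,
        # so the output amount follows directly from the new reserves.
        k = self.reserve_x * self.reserve_y
        new_x = self.reserve_x + amount_x
        new_y = k / new_x
        amount_y_out = self.reserve_y - new_y
        self.reserve_x, self.reserve_y = new_x, new_y
        return amount_y_out

# A pool of 10 ETH and 2,000 tokens implies a starting price of 200 tokens/ETH.
pool = ConstantProductPool(10.0, 2_000.0)
print(pool.swap_x_for_y(1.0))  # ~181.8 tokens: the price moves as reserves shift
```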

However, the flexibility and novelty of this new paradigm can sometimes lead to problems. An example was the Parity wallet bug, where a flaw in the code of a popular Ethereum multi-signature wallet allowed the shared library contract underpinning it to be self-destructed, leaving $280 million worth of currency permanently inaccessible. This was particularly galling as the wallet's creator, Gavin Wood, was also one of the original developers of Solidity, the language in which it was written, showing that even the best coders can be caught out sometimes.

This pattern has been seen in the non-blockchain world too, such as with the "Heartbleed" bug in the popular OpenSSL cryptographic library, but those systems are typically more fault-resilient because their data is not immutable.


The importance of the data model

For most applications of distributed ledger technology, fully decentralised, censorship-resistant data is not required.

For many institutional applications in particular, the promise centres on the ability to handle elements of the underlying data model in a trust-minimised manner.

The trust boundary can be the members of one organisation, or a group of organisations, rather than the whole world, which means less potential compromise is required on elements such as true immutability, decentralisation overhead and others.

Anchoring elements of the data model on a blockchain or another distributed ledger makes the potential use cases incredibly interesting: a shared state means we can move beyond the classic choice between structured and unstructured data towards more flexible models for moving information within and across organisations.
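
As a simple illustration of what "anchoring" can mean in practice, an organisation can keep a record in its own database and publish only a hash of it to the shared ledger; any counterparty can later verify its copy against that anchor without the ledger ever holding the data itself. The `shared_ledger` mapping below is a hypothetical stand-in: in reality it would be a transaction on a public chain or a write to a permissioned ledger such as Quorum or Fabric.

```python
import hashlib
import json

def anchor_hash(record: dict) -> str:
    """Compute a deterministic fingerprint of an off-chain record."""
    canonical = json.dumps(record, sort_keys=True)  # stable key order
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical shared ledger, standing in for an actual on-chain write.
shared_ledger: dict[str, str] = {}

record = {"invoice_id": "INV-001", "amount": 1200, "currency": "EUR"}
shared_ledger["INV-001"] = anchor_hash(record)  # only the hash is shared

# A counterparty holding its own copy of the record can verify it
# against the on-ledger anchor without trusting the sender's database.
assert shared_ledger["INV-001"] == anchor_hash(record)

tampered = {**record, "amount": 9999}
assert shared_ledger["INV-001"] != anchor_hash(tampered)  # tampering is visible
```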

The libraries and templates being built to take advantage of this new design space may appear to be tied to one chain or another, particularly given the economic incentives that many public chains have borrowed from the Bitcoin model to bootstrap themselves, on the premise that value will be retained at this base layer. It is becoming increasingly clear, however, that as long as certain functionality is achieved, the base layer can be largely abstracted away.

This points to increasing amounts of business logic and flow being captured in low-code or even no-code environments, as the degrees of freedom for information and value flows are not as wide as for, say, web applications or front-end code.

Indeed, in a composable future it is likely that there will be a range of specialised base-layer distributed ledgers (or even interoperability protocols such as Cosmos) on top of which transparent, functionality-focused data structures and usable functions will operate.

The logic of these layers will interact with more centralised, local functionality in areas such as identity, bridging on-chain and off-chain logic and interfacing with local data stores using privacy-enhancing technology.

The data anchored on distributed ledgers will be the fixed point against which previously un-automatable system processes can be put in place, increasing productivity and reducing operational inefficiency.

While some value will accrue at the base layer in this model, it is likely that the bulk of it will migrate up the stack, to those who can build these processes and use the right, constantly evolving building blocks to apply them at scale and to the highest-value operations.


Conclusion

Development in the distributed ledger space can, at times, seem to move ten times as fast as conventional software development, due to the massive amount of brainpower and capital currently directed at this sector.

We are moving rapidly from a period of low-level code and clunky interfaces straight to the "cloud" era of distributed ledger computing, where logic and high-value creative solutions can be built using low-code or no-code environments.

Removing the friction from accessing these tools in specific verticals, in particular enterprise applications where full decentralisation and censorship resistance are not required, will lead to an explosion of creativity and innovation over the next few years as, for the first time, we have almost all the right building blocks in place to take this on.


Article was written by:

EMAD MOSTAQUE

Blockchain Thought Leader