Commercial applications of Dynamo Coin

By Shaun Neal

Introduction

Dynamo is a cryptocurrency blockchain project which seeks to incorporate a number of unique technologies. Dynamo is first and foremost a proof of work blockchain secured by an ASIC-proof algorithm.

Future development items for Dynamo include native NFT support, a native custom virtual machine, on chain voting and hybrid proof of work/proof of stake security. These unique features will allow commercial entities to develop novel business models.

Use case categories

Store of value: this is of course the trivial use case which is enjoyed by every cryptocurrency. Simply buying and selling crypto as an asset class is a basic commercial value proposition.

Transactional money: this also is one of the most frequently claimed benefits of crypto — the ability to send and receive money with very low fees and little friction compared to traditional money transmission routes. Dynamo has a rapid block emission schedule (15 seconds) and a large block size (10 MB), so it is well positioned to allow for high transactional throughput for monetary transmission purposes.

Programmable money: this is where things start to get interesting. Note that not all blockchains can offer this benefit; only those with a native virtual machine can support smart contracts. Programmable money use cases include things like swaps, futures, time locked funds, interest earning deposits, event locked funds and anonymous lotteries. Basically anything that you normally think of a bank or securities firm as doing can be done on a blockchain trustlessly with no intermediary.

Asset ownership: this is the realm of NFT aware blockchains. Using NFTs one can track ownership of specific assets and trade those assets on marketplaces which are connected to the blockchain. Combining NFTs with a virtual machine offers very powerful on chain commercial uses, such as automatic auctions. NFTs also pair well with the gaming industry by representing collectable items for example. Finally, NFTs which store binary data can be used to represent any digital asset, such as a contract, an invoice, a video or audio recording, etc. and they can be transmitted easily and cheaply while retaining chain of custody — e.g. it is similar to sending an email, except that there is an immutable public record of how the asset changed hands. It is also possible to securely encrypt the binary data with a private key using symmetric encryption.
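
As a toy illustration of the symmetric encryption idea, the sketch below derives a keystream from the holder's key material using SHA-256. All names and the scheme itself are illustrative only; a production system should use a vetted cipher such as AES-GCM rather than this hand-rolled construction.

```python
import hashlib

def keystream_encrypt(data: bytes, private_key: bytes, nonce: bytes) -> bytes:
    """Toy CTR-style cipher: derive a keystream from SHA-256 of key+nonce+counter,
    then XOR it with the data. Decryption is the same operation."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(private_key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

asset = b"NFT binary payload"
key = b"holder-private-key-bytes"
nonce = b"unique-nonce"
ct = keystream_encrypt(asset, key, nonce)
pt = keystream_encrypt(ct, key, nonce)   # XOR stream cipher decrypts symmetrically
```

Because the cipher is a plain XOR keystream, the same function both encrypts and decrypts, which keeps the sketch short; the trade-off is that nonce reuse would be catastrophic in a real deployment.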

Voting: the utility of on chain voting is provided by governance token based blockchains. When combined with virtual machines and NFTs this can provide unique solutions in a variety of business cases.

Concrete use cases

Below are some examples of commercial use cases for Dynamo. Please note that these are presented for example only. They are not business or investment advice and any or all of them may not be legal in your local jurisdiction.

NFT marketplace — this allows buyers and sellers to meet and exchange currency for NFTs. This is typically done on a central exchange (i.e. a website); however, because Dynamo supports both NFTs and a virtual machine, this can now be done in a totally decentralized way. The marketplace can exist as a decentralized wallet, as a website or even a 3D virtual world where merchants can buy or rent space.

Micropayments — due to extremely low transaction fees and quick confirmation time, Dynamo can be used to complete small dollar transactions in a peer to peer marketplace.

Gaming — by integrating crypto into online games, publishers can reward players with in game native crypto currency or allow them to purchase in game assets represented by NFTs which can then be traded on a secondary market.

Coupon tracking — retailers can issue single use electronic coupons which can be redeemed in store and are guaranteed to not be re-used and can be fully tracked.

Admission tickets — venue operators can issue NFT tickets which are redeemed for admission to an event. Redemption can be facilitated by a mobile wallet app which can only be unlocked by the holder’s private key.

IoT integration — real world devices can be connected to smart contracts or NFT transfer such that events can be triggered (e.g. locker unlock, dispense drink, etc) when funds are received, with no requirement on a trusted intermediary.

Software license — applications can be made blockchain aware and allow users to access licensed software based on NFT or smart contract proof of ownership. Smart contracts can also “dispense” software installation programs automatically when payments are received.

Supply chain management — for some applications where market participants are not known or there are large numbers of smaller participants, it is possible to develop supply chain traceability markers using blockchain. These can be made public or can be encrypted on chain with users’ private keys.

Conditional payment — through the use of data Oracles, it is possible to release stored funds based on externally monitored events. For example, peer to peer sports betting may be facilitated by users depositing their funds into a smart contract along with their wager (over/under/points/etc). Then an Oracle monitors the agreed upon source of data and when the game is decided, the smart contract automatically releases funds based on the rules of the wager. This can be applied to all manner of financial scenarios which currently have to rely on a third party intermediary — insurance, wills, trusts, estates, etc.
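
The escrow flow described above can be sketched as follows. The class and method names are hypothetical, not an actual Dynamo contract API; the point is only to show deposits being held until an oracle-reported outcome triggers payout.

```python
class WagerContract:
    """Sketch of an oracle-settled escrow: stakes are locked until the
    agreed-upon oracle reports the outcome, then winners split the pot."""
    def __init__(self, oracle_id):
        self.oracle_id = oracle_id
        self.stakes = {}      # address -> (amount, predicted_outcome)
        self.settled = False

    def deposit(self, address, amount, predicted_outcome):
        assert not self.settled, "wagering is closed"
        self.stakes[address] = (amount, predicted_outcome)

    def settle(self, oracle_id, actual_outcome):
        # only the agreed-upon oracle may trigger settlement, exactly once
        assert oracle_id == self.oracle_id and not self.settled
        self.settled = True
        pot = sum(amount for amount, _ in self.stakes.values())
        winners = [a for a, (_, p) in self.stakes.items() if p == actual_outcome]
        share = pot / len(winners) if winners else 0
        return {a: share for a in winners}

c = WagerContract(oracle_id="sports-feed")
c.deposit("alice", 50, "over")
c.deposit("bob", 50, "under")
payouts = c.settle("sports-feed", "over")   # alice takes the 100-coin pot
```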

Conclusion

Because Dynamo incorporates several novel technologies, it is uniquely positioned to provide commercial solutions to problems not solved by other blockchains. I have presented several theoretical and concrete commercial use cases for Dynamo which either enhance existing processes or create entirely new solutions.

Design of a trivial mining pool

By Shaun Neal

Introduction

Due to the massive growth in popularity of Dynamo mining, it became necessary to implement a mining pool. Small solo miners were no longer able to mine any substantive number of blocks, which was leading to centralization and frustration among early adopters.

Mining pools allow many users to group their mining hashrate together and then share in the proportional rewards. Participants do not get any specific block but rather a percentage of the pool that they participate in. There are many schemes which can be used to divide the pool, however I will only cover the method I used in the dynamo pool.

Source code for the mining pool and mining software as well as pool installation instructions can be found on the Dynamo website at https://www.dynamocoin.org/

Design

When mining solo, a miner connects their software to a full node. The miner requests a block of data to mine and then sets about solving the cryptographic puzzle. Once an answer has been found, the miner submits that to the network for acceptance, and if the miner is the first to solve the puzzle, they are awarded the block reward.
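
The solo mining loop can be sketched like this. Double SHA-256 stands in for Dynamo's custom script-based hash, and the target value is chosen artificially easy for illustration.

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 2_000_000):
    """Vary a nonce until the double-SHA256 of the header, read as a big
    integer, falls below the target. Lower target = harder puzzle."""
    for nonce in range(max_nonce):
        attempt = header + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(attempt).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None, None

# an easy target (about 1 success per 256 tries) so the sketch finishes quickly
easy_target = 1 << 248
nonce, digest = mine(b"example-block-header", easy_target)
```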

The mining pool works in between the miner and the full node as a sort of proxy. The steps are generally:

1 — Miner requests pool information, e.g. what the mining pool wallet is, to be used later when assembling the block.

2 — Miner requests a block to mine. This causes the mining pool to request a block from the full node which is relayed to the miner with one small change. The mining pool works on an easier “difficulty” level than the network. As an example, the current network difficulty may be 500. The pool will adjust the difficulty to 50 before sending the block on to the miner.

3 — The miner will set about mining the block at the new lower difficulty and finding a solution. Due to the way the solution is selected (e.g. at random), it is possible that it will be not only under the lower difficulty, but also under the network difficulty. As an example, consider a game of dice where the object is to roll two six-sided dice under a total of 5. Anyone who rolls a sum of 2, 3 or 4 wins the prize. Now imagine a broker who hires 5 people to roll dice on his behalf. He tells them to roll under an 8. Some of those rolls will also be under 5, and thus the broker will get the prize for some of the rolls. This is the same principle that the mining pool’s reduced-difficulty solution works under.
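
The dice analogy can be checked by enumerating all 36 outcomes of two dice: 21 rolls come in under 8 (the pool "shares"), and 6 of those are also under 5 (network-level wins).

```python
from itertools import product

rolls = [a + b for a, b in product(range(1, 7), repeat=2)]   # all 36 outcomes
under_8 = [s for s in rolls if s < 8]     # shares at the easier pool threshold
under_5 = [s for s in under_8 if s < 5]   # shares that also beat the network target

share_count = len(under_8)   # 21 of 36 rolls qualify as shares
win_count = len(under_5)     # 6 of those are network-level wins
```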

4 — The miner finds a solution and submits it to the pool. The pool records the fact that the miner found a solution and, if the solution is less than the network difficulty, also submits the block to the network for a reward. The reward is paid to the shared pool wallet, not to the miner.

5 — Periodically, the pool checks the shared wallet to see if there are any funds in it. If there are, the funds are divided among the miners based on their share of the blocks submitted. So if there are 2 miners and one submits 40 blocks and the other submits 60 blocks, then they will get a 40/60 split of whatever coins are in the wallet. Of the 100 total blocks submitted, some small number will actually be winning blocks, say 3 — that is the amount that is split 40/60.
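
The proportional split in step 5 can be sketched as follows. The 3 winning blocks and the 50-coin block reward are assumed figures for illustration only.

```python
def distribute(pool_balance: float, shares: dict) -> dict:
    """Proportional payout: each miner gets pool_balance * (own shares / total)."""
    total = sum(shares.values())
    return {miner: pool_balance * count / total for miner, count in shares.items()}

# 3 winning blocks at an assumed 50-coin reward, split 40/60 by submitted shares
payouts = distribute(3 * 50.0, {"miner_a": 40, "miner_b": 60})
```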

Implementation

I used c# as the language because it is a high level structured language, is cross platform portable and has many third party libraries. I used SQLite as the back end database because it is portable and requires no installation or configuration.

The server is a single executable which runs 2 main threads. The first thread runs an HTTP listener which services incoming RPC requests from miners. For each connection that arrives, a new thread is created and assigned a worker to service the request. The second thread is the distributor. It runs periodically and does the allocation of coins to miners.

The worker thread is fairly simple. It determines which method is being called and takes the appropriate action. In some cases the pool can service the request directly, in others it must interact with the full node. A simple webclient interface is used to connect to the full node.

The distributor is also fairly straightforward. It compares the current time to the last run time and interval and decides when it needs to execute. On execution it checks the shared wallet balance via RPC call, and if non-zero, sums up the shares for all miners and distributes them, again via RPC calls.

The main complexity in the pool design was the replication of the Dynamo proof of work algorithm. This was previously created using C++, so it had to be ported to C#, which was quite time consuming due to all of the byte order issues.
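
As a small illustration of the byte order issues, the same 32-bit value serializes differently depending on endianness, and unpacking with the wrong convention silently scrambles it — exactly the class of bug that makes a C++ to C# port time consuming.

```python
import struct

value = 0x11223344
big = struct.pack(">I", value)     # big-endian/network order: 11 22 33 44
little = struct.pack("<I", value)  # x86 little-endian order:  44 33 22 11

# reading little-endian bytes with a big-endian unpack scrambles the value
wrong = struct.unpack(">I", little)[0]
```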

Conclusion

The dynamo pool is a simple yet fully functioning mining pool server which can be used by pool operators as a starting point for more robust commercial mining pool services.

Technical implementation of off chain NFTs with support for post-facto deletion

By Shaun Neal

Introduction

I have previously written about methods to store NFT data on chain so that assets can be manipulated by smart contracts in novel ways which will allow for content creators to monetize their original creations.

My naïve implementation suggested storing asset data directly on the blockchain, mostly due to ease of integration. This implementation suffers from a fatal flaw — automatic replication of objectionable or illegal content with no possibility for deletion. Full node operators need fine-grained control over what they host on their own servers. The community at large needs the ability to censor objectionable or illegal content so that it does not propagate into the ecosystem and harm the project. The system needs to be designed to honor valid takedown requests.

With those goals in mind, I believe that off chain NFT asset data is most appropriate. However, this requires full node operators to agree to store a complete copy of the NFT database, and an appropriate incentive structure needs to be supported. To that end, I will propose a “storage miner” which will apply for a shared reward at most once per day.

Technical Implementation

I propose that the following on chain commands be implemented to support NFT storage and management. Each command will be embedded into an OP_RETURN signature, accompanied by any relevant fee.

1 — CREATE ASSET CLASS: this will create the asset class metadata to store assets. An asset class hash will be generated on chain once the transaction is mined. The asset class data will be stored in the local full node database where the transaction is created.

2 — CREATE ASSET: this will create an individual asset within an asset class. The asset may have an accompanying data file. An asset hash will be generated on chain once the transaction is mined. The asset data will be stored in the local full node database where the transaction is created.

3 — ASSET CLASS TAKEDOWN: this will signal to all full nodes that an asset class, and all its associated assets, should be marked for takedown/deletion. Implementation of this command is optional.

4 — ASSET TAKEDOWN: this will signal to all full nodes that an individual asset should be marked for takedown/deletion. Implementation of this command is optional.
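
A minimal sketch of embedding such commands in an OP_RETURN payload might look like this. The preamble bytes and command codes are invented for illustration; the article does not specify the actual byte layout.

```python
# Hypothetical preamble and command codes -- illustrative values only.
PREAMBLE = b"DYNNFT"
COMMANDS = {"CREATE_ASSET_CLASS": 1, "CREATE_ASSET": 2,
            "ASSET_CLASS_TAKEDOWN": 3, "ASSET_TAKEDOWN": 4}

def encode_op_return(command: str, payload: bytes) -> bytes:
    """Pack an NFT management command into an OP_RETURN-style payload."""
    return PREAMBLE + bytes([COMMANDS[command]]) + payload

def decode_op_return(data: bytes):
    """Return (command_name, payload), or None for ordinary OP_RETURN data."""
    if not data.startswith(PREAMBLE):
        return None
    code = data[len(PREAMBLE)]
    name = {v: k for k, v in COMMANDS.items()}[code]
    return name, data[len(PREAMBLE) + 1:]

msg = encode_op_return("ASSET_TAKEDOWN", b"asset-hash-bytes")
decoded = decode_op_return(msg)
```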

Asset class data and asset file data will be replicated with the blocks that create them as part of the normal peer to peer communications in the Dynamo network. Full node operators can optionally choose to implement the NFT storage database. If they do not, asset file data will not be replicated to their node. Full node operators must make the choice at genesis — the option cannot be later enabled, because the full NFT chain must be built from the start.

Each full node which replicates the NFT database will generate a unique hash that encrypts its local copy.

Full node operators which choose to store the NFT database will be able to apply once per day for a storage miner reward as follows:

1 — The coinbase transaction of every block mined will include an additional output for storage miners to claim. The output will be sent to a contract for storage.

2 — Storage miners will call the contract to register their node by passing their database encryption hash and wallet address. Each encryption hash must be unique — duplicate hash registrations will be denied.

3 — The contract will mine an output specific to that registration which will indicate a byte position in a specific file that must be retrieved.

4 — The storage miner will read the contract and retrieve 4 sequential bytes from the indicated file and byte position and post that data to the contract.

5 — If the retrieved bytes are correct, the contract will award a portion of the pooled storage rewards to the storage miner.
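
Steps 3 through 5 can be sketched as a challenge/response. The file names, seeding scheme and data below are illustrative; the point is that only a node actually holding the database can answer an unpredictable byte-position challenge.

```python
import random

def issue_challenge(db_files: dict, seed: int):
    """Contract side: pick a file and byte offset the storage miner must prove it holds."""
    rng = random.Random(seed)   # deterministic for a given registration
    name = rng.choice(sorted(db_files))
    offset = rng.randrange(len(db_files[name]) - 4)
    return name, offset

def answer_challenge(db_files: dict, name: str, offset: int) -> bytes:
    # storage miner side: read the 4 sequential bytes at the indicated position
    return db_files[name][offset:offset + 4]

db = {"block_0001.dat": bytes(range(256)),
      "block_0002.dat": bytes(range(255, -1, -1))}
name, offset = issue_challenge(db, seed=42)
proof = answer_challenge(db, name, offset)
valid = proof == db[name][offset:offset + 4]   # contract checks against its record
```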

Full Node Control of Stored Content

Using this mechanism, objectionable or illegal assets can be removed from the blockchain ecosystem in an automated manner and full node operators are free to honor the takedown requests, or not, depending on their personal preference, jurisdiction, etc.

Any takedown command must be signed by a private key. It therefore becomes trivial to create a list of trusted public keys and ignore (or at least review) all others. Community based sourcing of trusted public addresses can occur which would allow major IP holders or law enforcement to identify themselves and be placed on the trusted list.
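
A trusted-key allowlist might be sketched like this. The key names are placeholders, and real verification would check a cryptographic signature rather than a plain signer field; the sketch only shows the honor-or-review routing.

```python
# Illustrative allowlist: honor takedown commands only when signed by a trusted key.
TRUSTED_KEYS = {"pubkey_law_enforcement", "pubkey_major_ip_holder"}

def filter_takedowns(commands):
    """Split incoming takedown commands into honored vs. manual-review queues."""
    honored, review_queue = [], []
    for cmd in commands:
        (honored if cmd["signer"] in TRUSTED_KEYS else review_queue).append(cmd)
    return honored, review_queue

cmds = [{"asset": "a1", "signer": "pubkey_law_enforcement"},
        {"asset": "a2", "signer": "pubkey_unknown"}]
honored, review_queue = filter_takedowns(cmds)
```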

Note that full node operators are not “distributing” content in any real sense of the word — the blockchain simply contains the data. Real harm occurs to individuals or entities when the content is actually sent, viewed, etc. It is expected that consumer based service providers will run full nodes to make the blockchain data easily accessible to non-wallet users (e.g. hook up a web server and database to the blockchain, index the data and make it searchable, viewable, etc). These service providers will likely create automatic content filters and mechanisms to honor takedown requests.

Some full node operators may be concerned about storing illegal information for any time at all, for example, during initial block download. Mechanisms can easily be built to cache and download current takedown lists and then simply suppress those UTXO as they are received.

Controlling entry of illegal content into the mempool can also be implemented. Full node operators who offer upload services may choose to implement content filters or review processes before accepting any transaction. Full node operators can blacklist specific public keys or IPs and simply not accept or relay those transactions. A mechanism can be developed to pass blacklist information on chain for known bad actors. Miners can make use of the same information and simply never mine transactions which contain illegal material or material submitted by blacklisted addresses.

Conclusion

I have presented a novel solution which allows for NFT asset storage fully contained within a crypto ecosystem that provides a mechanism to honor takedown requests on a per-node basis and rewards full node operators who choose to store NFT asset data.

The implementation of this technology will be posted on the Dynamo Coin github. 

Implementing NFTs on the Dynamo blockchain

By Shaun Neal

Introduction

NFTs (non-fungible tokens) have recently gained popularity as an alternative asset class, particularly in the art world. What seems to be currently lacking is a coherent NFT technology platform which allows for on-chain storage of artwork files and a consolidated, simple interface for market participants to meet and exchange NFTs in a truly decentralized way. Most solutions involve storing files off-chain and most markets are centralized commercial entities. Finally, I was unable to find any blockchain technology which allows for NFTs to natively participate in smart contract functionality on chain.

This article proposes to solve these issues by incorporating native support for NFT files on chain and making the Dynamo virtual machine NFT aware through extended opcodes. I will publish a subsequent article on the technical implementation of this once further discussion on the details has occurred.

Mechanics

This section focuses on artwork use cases which are generally either image files, short movie clips or audio files for music. This assumes the maximum file size will be less than the maximum block size in Dynamo — 10 MB.

Creating an asset class — this is the action of creating a master asset and assigning its properties — name and number of tokens. An example asset could be “Diamond Sparkle Dog” with 10 tokens. Each Diamond Sparkle Dog in the series could be a unique image by the author put up for auction (for example). NFTs do not need to have associated files; for example, a company wanting to track licenses on chain could issue 1,000,000 NFTs with sequential or random license numbers to be distributed to users upon purchase.

Creating an individual asset — this is the action of minting a specific asset within the asset class — Diamond Sparkle Dog #6, for example. The asset can include metadata, such as the specific name, a URL, arbitrary text and a file. The asset will also be assigned an owner, which will initially be the creator. The owner will be recorded at the time of mining the block that creates the asset and will be saved as the bech32 public key address of the user. As noted above, the total size of all NFT data will be limited by the Dynamo block size of 10 MB.

Transferring an asset — this is the action of sending a specific NFT from one user to another. The ownership metadata is changed at the time of mining the block that includes the asset transfer command.
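
The three operations above can be sketched as an in-memory registry. The field names and checks are illustrative, not Dynamo's on-chain format; the sketch just shows the ownership state machine.

```python
class NftRegistry:
    """Sketch of asset class creation, minting and transfer as a registry."""
    def __init__(self):
        self.classes = {}   # class name -> token count
        self.assets = {}    # (class name, index) -> owner address

    def create_class(self, name: str, tokens: int):
        self.classes[name] = tokens

    def mint(self, name: str, index: int, creator: str):
        assert name in self.classes and 1 <= index <= self.classes[name]
        assert (name, index) not in self.assets, "already minted"
        self.assets[(name, index)] = creator   # creator is the initial owner

    def transfer(self, name: str, index: int, sender: str, recipient: str):
        assert self.assets[(name, index)] == sender, "only the owner may transfer"
        self.assets[(name, index)] = recipient

reg = NftRegistry()
reg.create_class("Diamond Sparkle Dog", 10)
reg.mint("Diamond Sparkle Dog", 6, "bech32_creator_addr")
reg.transfer("Diamond Sparkle Dog", 6, "bech32_creator_addr", "bech32_buyer_addr")
```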

Integration with virtual machine — this will allow for powerful peer to peer, anonymous interaction with NFTs on chain. As described below this will create very unique new business possibilities enabling individuals to profit from their own creations directly with no intermediary.

The above functions should all be implemented in a wallet. The wallet should allow for creation of the asset class, creation of individual assets and addition of metadata, asset files and asset transfer.

Tokenomics

Each of the above actions will incur an on-chain gas fee which will be shared by the miners and stakers. Because the addition of NFTs could dramatically increase the size of the blockchain, it may be necessary to create an additional incentive for full nodes to store the NFT database. This could be done as an additional, separate, staking reward based on time online, amount of database stored, etc. The theoretical limit of the database size increase is about 58GB per day (5,760 blocks of 10 MB each), although if this technology becomes widely adopted, it’s likely that the price (in $USD) to mint many large assets would be prohibitive and blockchain abuse will be mitigated by market forces.

Note that there is no specific “fee” for creating an asset class, creating an individual asset or transferring ownership. The cost for these activities is the native chain gas fee which is measured in DYN per kb and is set by market forces.

Business Use Cases

The true power of blockchain technology is the facilitation of peer to peer, anonymous, trustless transactions. By combining on chain assets, NFT on chain tracking and a native NFT aware virtual machine, this vision can be realized in practice.

Use Case #1 — Online auction. The creator of Diamond Sparkle Dog can create a smart contract which accepts bids for a certain duration and then automatically transfers a specific asset to the winner at the end of the auction. Bidders send bids to the contract which are stored for the duration of the auction. The winning bid is automatically transferred to the creator and the remaining losing bids are sent back to the bidders (less gas fees). This allows for a completely anonymous, trustless and intermediary free art auction that solely benefits the creator.
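
The auction flow in Use Case #1 can be sketched as follows. Gas fees and the NFT transfer itself are omitted for brevity, and all names are illustrative rather than an actual contract interface.

```python
class AuctionContract:
    """Sketch of a sealed-pot auction: bids are locked for the duration, then
    the asset goes to the top bidder, proceeds to the creator, refunds to the rest."""
    def __init__(self, creator):
        self.creator = creator
        self.bids = {}        # bidder address -> locked amount
        self.closed = False

    def bid(self, bidder, amount):
        assert not self.closed, "auction has ended"
        self.bids[bidder] = max(amount, self.bids.get(bidder, 0))

    def close(self):
        self.closed = True
        winner = max(self.bids, key=self.bids.get)
        refunds = {b: amt for b, amt in self.bids.items() if b != winner}
        # asset -> winner, winning bid -> creator, losing bids -> refunded
        return winner, self.bids[winner], refunds

a = AuctionContract("creator_addr")
a.bid("alice", 120)
a.bid("bob", 95)
winner, proceeds, refunds = a.close()
```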

Use Case #2 — Online raffle. The Diamond Sparkle Dog raffle contract could sell a large number of small value raffle tickets and then randomly select a winner at the end of the raffle. The winner would receive the NFT ownership and the raffle proceeds would be sent to the creator. This is again totally anonymous and trustless.

Use Case #3 — Software license tracking. A software company could create NFT licenses and issue them to users which could be unlocked using the user’s private key. This would require integration with a blockchain and a light wallet to receive the keys, however it would be a convenient, trustless and anonymous way to secure software licenses. Licenses could be autonomously purchased from a contract and automatically generated and sent to purchasers. They could also be transferred among users or resold autonomously through other smart contracts.

Use Case #4 — Coupons. A promotional company could create a coupon smart contract which sends free coupons to token holders, via website request, airdrop, etc. The redemption of the coupon could be integrated into the wallet to verify the owner and then burned to ensure non-reuse.

Use Case #5 — Tickets. Similar to coupons, a venue operator can issue tickets for sale via a smart contract which users are then able to redeem using their private keys, or resell on a secondary market. This eliminates any possibility of fraud or ticket re-use and provides for anonymous, trustless sales.

Summary

I have presented a framework for natively integrating NFT support on chain and developing an NFT aware virtual machine that will facilitate the creation of NFT based smart contracts, opening many new possibilities for NFT usage not possible today.

I will shortly publish a technical implementation document detailing how this will be accomplished on the Dynamo blockchain.

The need for energy efficient proof of work algorithms

By Shaun Neal

Background

There is a recent (circa May 2021) focus on the energy consumption metrics of various proof of work crypto coins, driven by celebrity tweets, pure misinformation, state actors with hidden agendas, and many people with purely good intentions for the environment. Many claims circulate along the lines of “this coin uses more energy than X country”. These claims are generally valid, although they may be based on dubious sources. Clearly there is a need to increase the efficiency of proof of work algorithms to free energy consumption for other uses and reduce greenhouse emissions.

This all boils down to one thing — deterministic memory bandwidth.

Analysis

There are a plethora of proof of work algorithms which range in complexity. Bitcoin’s SHA256 is on one end — extremely simple, almost zero memory usage and highly parallel. On the other end are script based algorithms like those used by Monero.

Point 1 — ASIC mining fosters high energy consumption. ASIC manufacturers invest large sums in R&D to create products that need to be consumed at scale for an ROI. This means that their product must be widely deployed and thus, will naturally consume more energy than other solutions due to the sheer scale needed for profitability. ASIC based POW are de facto high energy consumers. Therefore, POW algorithms should endeavor to be ASIC proof — meaning that it will never be profitable for an ASIC manufacturer to invest in the requisite R&D to bring a product to market. This can easily be achieved by frequently changing the algorithm in a random way, on chain, as I have proposed and developed (links below). There is nothing technically special about this — build a mechanism to change the algorithm every 90 days and no ASIC manufacturer is going to look at your coin ever.

Point 2 — Memory access uses power. Algorithms which are memory bandwidth intensive (ETH, KAWPOW, etc) use a lot of power. Why? Because the GPU compute unit / global memory interface is power intensive. Moving data takes energy. Any algorithm which builds a DAG (directed acyclic graph) takes a lot of power and generates a lot of heat. The rationale behind a DAG is that ASIC manufacturers will not be able to achieve significant energy/cost ratio gains over commercial GPUs, which has generally been true, but at an incredible cost — power and heat. Addressing ASIC consolidation with point 1 is much easier. Make the algorithm energy efficient and change it often, on chain, autonomously, without hard forks.

Point 3 — Script based algorithms use significantly less power. By “script based” I mean algorithms which cannot be parallelized because they are deterministic — the input of operation N depends on the output of operation N-1. There is no way to build parallelization into the algorithm because it all must be executed step by step. In practice these algorithms use 5% of the power of memory hard algorithms that are constantly pounding global RAM. It is possible to make these algorithms GPU friendly by designing the RAM requirements to fit into local compute unit limits and using 32 bit friendly operations which can be optimized in GPU. For example, if an operation requires a 256 bit addition, the implementation can break the add into 8 discrete 32 bit adds and let the 32 bit adds overflow internally instead of globally. This can be easily optimized in OpenCL so that consumer grade GPUs still benefit.
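
The 256-bit addition example can be sketched as eight 32-bit limb adds with an explicit carry, mirroring what an OpenCL kernel working in 32-bit registers would do:

```python
MASK32 = 0xFFFFFFFF

def add256(a: int, b: int) -> int:
    """Add two 256-bit integers as 8 discrete 32-bit adds with manual carry;
    any carry out of bit 255 is discarded, like fixed-width hardware addition."""
    result, carry = 0, 0
    for i in range(8):                   # limb 0 is least significant
        la = (a >> (32 * i)) & MASK32
        lb = (b >> (32 * i)) & MASK32
        s = la + lb + carry
        carry = s >> 32                  # overflow feeds the next limb
        result |= (s & MASK32) << (32 * i)
    return result & ((1 << 256) - 1)

x = (1 << 256) - 1      # all ones: forces a carry through every limb
y = 1
wrapped = add256(x, y)  # wraps around to zero
```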

Point 4 — POW is superior to POS. Proof of work is the ultimate decentralized security solution. Proof of stake suffers from many centralization issues, such as the “nothing at stake” attack, invalid slashing, centralization on compute platforms like AWS/Azure/GCP and ultimately, SEC intervention for operating as an unregistered security. Proof of stake encourages those who will benefit from the ecosystem to stake, such as exchanges. This is the exact opposite of decentralization. A healthy security ecosystem has multiple parties at odds with each other competing for rewards, not a small number of centralized entities working together for their own benefit. Proof of work miners couldn’t care less about the users, and that’s exactly how security should work.

Conclusion

Coin developers should create custom POW algorithms which use the least amount of power possible and are ASIC resistant. Memory hard functions are inherent power consumers and should be avoided. Proof of Stake is an insecure solution which should only be used in conjunction with proof of work. Using a deterministic script based proof of work algorithm, it is possible to reduce energy consumption by an order of magnitude and still maintain high levels of decentralized security.

An implementation of a custom script based proof of work algorithm can be found on the Dynamo coin github, which can be accessed from here: https://dynamocoin.org

Real world field testing of this algorithm shows that power consumption on an RX580 8GB card for Dynamo hash is less than 10% of the power consumption for Ethash.

Things I learned from writing an OpenCL GPU miner

By Shaun Neal

Recently I completed development of the reference GPU miner for Dynamo. This was a ground up development for a custom script based hash algorithm. I didn’t have any code to fork from or even review; this was starting from zero. I also had no OpenCL or GPU programming experience, although I did have extensive OpenGL exposure, which turned out to be very useful because OpenCL and OpenGL are quite similar in their concepts.

The reference code for Dynamo GPU miner can be found here: https://github.com/dynamofoundation/dyn_miner This is a Visual Studio solution which can be built out of the box. Simply clone and build. You will need curl and OpenCL, which can be installed with vcpkg. If you need help building check out our discord or telegram which can be accessed at https://dynamocoin.org

I initially thought writing a GPU miner was basically impossible, a task relegated to super coders. As it turns out, it’s relatively simple once you understand a few key concepts.

  1. The OpenCL compiler is peculiar. There are many small caveats to the compiler that can waste a lot of time. Your only defense is printf. Use printf liberally to debug what’s going on. The drawback with Windows printf in OpenCL is that it adds a newline to every output, so you have to batch up your data to see anything meaningful.
  2. Program in parallel if possible. Because Dynamo is purpose built as a script based procedural hash language, this was not possible. Your algo may be different, and you want to use get_global_id() and get_local_id() to break up your work.
  3. Compiler errors are very funky. If you miss a curly brace it will tell you there is an error at the end of the file, not where the brace is missing. Use comments to prune portions of code and re-compile until you track down the offending line. Also, compiler failure is just reported as an error code; you need to dump out the debug log to see the actual reported errors and line numbers. The code in the reference miner shows how to do this.
  4. Compute units don’t really mean anything. For my test I used an RX580 which has 36 compute units. When I ran the miner with a global work size of 36 it was ok, but when I ran it with a global work size of 1,000 it was much faster. Something about the call to clEnqueueNDRangeKernel global work size parameter seems to have nothing to do with the actual number of compute units. I tried many different values and 1,000 to 2,000 seemed to be a good number for optimal performance. Try different values to see what works for your algo.
  5. Local or private memory made no difference for me. I tried several rounds of optimization by copying __global tagged buffers to __local or __private. Nothing resulted in any performance increase. This may have been due to the highly deterministic nature of my algo.
  6. Type checking is lazy. If you pass a char to a uint you will end up with a mess. There is basically zero type checking in the OpenCL compiler. So char 0x89 turns into 0xFFFFFF89 as a uint. You need to write your own memcpy and big endian/little endian conversions. Again, see the reference code for examples of how to do this.
  7. Port from Windows to Linux is (almost) trivial. Once you get around the library includes and linkages, OpenCL works out of the box on Ubuntu. I ported my Windows GPU miner to Ubuntu in a few hours and had it running under HiveOS (Ubuntu 18) with basically no changes.
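
The char-to-uint surprise from point 6 can be reproduced outside OpenCL; Python's ctypes shows the same C-style sign extension that turns 0x89 into 0xFFFFFF89:

```python
import ctypes

raw = 0x89                                   # byte value with the high bit set
as_signed = ctypes.c_int8(raw).value         # interpreted as signed char: -119
as_uint = ctypes.c_uint32(as_signed).value   # sign-extended to 32 bits

# the "mess" from point 6: 0x89 silently becomes 0xFFFFFF89
sign_extended = as_uint == 0xFFFFFF89

# masking to a byte before widening avoids the surprise
safe = ctypes.c_uint32(as_signed & 0xFF).value
```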

In summary, you can write your own GPU miner. This is actually achievable by mere mortals in a reasonable timeframe (mine took about 20 hours). It’s a great educational exercise. Feel free to drop into our Discord or Telegram if you want help building your own.

Technical implementation of smart contracts in Dynamo Cryptocurrency

By Shaun Neal

Introduction

This article will detail the mechanism used to create and execute smart contracts in Dynamo. Dynamo is a unique cryptocurrency implementation based on a BTC fork, using a hybrid UTXO/account design. The primary design goals were ease of implementation, minimal security impact and ease of use. This was achieved by leveraging OP_RETURN data to secure contracts in the main blockchain.

Contract creation

Contracts are created by issuing an OP_RETURN command with a special preamble. This signature signals the full node that the transaction will create a contract. The transactions are submitted via the RPC interface like any other transaction and are stored and relayed in the mempool until mined.
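A contract-creation script of this shape can be sketched as raw script bytes. The 4-byte preamble below is a made-up placeholder (the real magic bytes are in the Dynamo source); OP_RETURN (0x6a) and OP_PUSHDATA1 (0x4c) are standard Bitcoin script opcodes:

```cpp
#include <cstdint>
#include <vector>

static const uint8_t OP_RETURN_OP = 0x6a;
static const uint8_t OP_PUSHDATA1_OP = 0x4c;
// Hypothetical contract preamble; the real value lives in the Dynamo repo.
static const std::vector<uint8_t> kContractPreamble = {0xd9, 0x4a, 0x4d, 0x01};

// Build an OP_RETURN script: OP_RETURN <push> <preamble || contract bytes>.
std::vector<uint8_t> BuildContractScript(const std::vector<uint8_t>& code) {
    std::vector<uint8_t> payload = kContractPreamble;
    payload.insert(payload.end(), code.begin(), code.end());
    std::vector<uint8_t> script;
    script.push_back(OP_RETURN_OP);
    if (payload.size() < 0x4c) {
        script.push_back(uint8_t(payload.size()));  // direct push opcode
    } else {
        script.push_back(OP_PUSHDATA1_OP);          // larger payloads
        script.push_back(uint8_t(payload.size()));
    }
    script.insert(script.end(), payload.begin(), payload.end());
    return script;
}
```

A full node scanning transaction outputs only needs to match the leading OP_RETURN plus preamble bytes to recognize a creation command; everything after the preamble is the contract body. (Payloads of 256 bytes or more would need OP_PUSHDATA2; this sketch omits that case.)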

The mining software hashes the OP_RETURN transaction like any other transaction and returns a mined block to the full node for validation. Other than increasing the OP_RETURN relay size, no other changes to consensus or mining were required up to this point.

The mining software returns the assembled block to the full node for acceptance. The full node recognizes the contract signature, validates the contract, stores the newly created contract in a local database and cache memory and then relays the block as normal. Any incoming relayed block will likewise be validated and stored. The contract address is deterministically calculated from the sender address and block hash, so each full node will assign the same address.
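The deterministic address derivation can be sketched as follows. This is illustrative only: std::hash stands in for the real cryptographic hash the node uses, and the "dyc_" prefix is hypothetical. The property that matters is that every full node, given the same sender address and block hash, computes the same contract address:

```cpp
#include <functional>
#include <sstream>
#include <string>

// Illustrative sketch: derive a contract address from the sender address
// and the hash of the block that mined the creation transaction.
// std::hash is a stand-in for the node's real cryptographic hash.
std::string DeriveContractAddress(const std::string& sender,
                                  const std::string& blockHash) {
    size_t h = std::hash<std::string>{}(sender + "|" + blockHash);
    std::ostringstream out;
    out << "dyc_" << std::hex << h;   // "dyc_" prefix is hypothetical
    return out.str();
}
```

Because the block hash is an input, the address is only fixed once the creation transaction is mined, which is consistent with contracts becoming executable only after they are in the chain.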

Any outputs which are sent to an OP_RETURN transaction are provably unspendable and are excluded from the UTXO set. The funds, if any, which are sent in the contract creation command are loaded into the internal contract balance. Sufficient funds must accompany the transaction to mine the contract.

Contract execution

Once a contract has been mined into the blockchain it can be executed. It is executed by sending outputs to the contract address. The execution command is itself a specially signed OP_RETURN; it contains the name of the function to run and may optionally contain additional data which is exposed internally to the contract when it executes. Again, all funds are provably unspendable and are therefore loaded into the contract balance (less mining and execution fees).

The execution command is loaded to the mempool where a miner will eventually retrieve it for execution.

The mining software recognizes the execution signature and runs the contract in a sandboxed VM. The getblocktemplate RPC includes all necessary data to execute any contracts which are being called — contract code, current state and persistent data.

If the contract sends funds to any address, those UTXO are created as part of the coinbase transaction and the amount is subtracted from the contract balance.

Once the miner has completed a block, including all contract executions, the block is submitted to the full node.

The full node then executes all contracts stored in the block and verifies the state updates and coinbase transactions. If validation passes, the block is added to the chain and relayed to peer nodes. Those full nodes then perform the same validation, executing all contracts themselves.

State and persistent storage

Contracts have state and associated persistent storage.

Contract state consists of the current contract balance and the number of times the contract has been executed. The execution count is included in order to ensure that hash values update for each execution even if the balance does not change. Contract state is stored as an OP_RETURN UTXO data element with a value of 0 in the coinbase transaction of any block which contains contract executions.
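The state record described above can be sketched as a small struct. The field layout and names here are illustrative, not the actual on-chain encoding; the sketch shows why the execution counter matters — it forces the serialized record (and therefore its hash) to change on every execution, even when the balance does not:

```cpp
#include <cstdint>
#include <vector>

// Sketch of per-block contract state: balance plus execution counter.
struct ContractState {
    uint64_t balance = 0;
    uint64_t execCount = 0;

    void ApplyExecution(int64_t balanceDelta) {
        balance = uint64_t(int64_t(balance) + balanceDelta);
        ++execCount;  // bumped even when balanceDelta == 0
    }

    // Little-endian serialization of both fields; this byte string is what
    // would ride in the zero-value OP_RETURN output of the coinbase.
    std::vector<uint8_t> Serialize() const {
        std::vector<uint8_t> out;
        for (int i = 0; i < 8; ++i) out.push_back((balance   >> (8 * i)) & 0xFF);
        for (int i = 0; i < 8; ++i) out.push_back((execCount >> (8 * i)) & 0xFF);
        return out;
    }
};
```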

Contract persistent storage is similarly created in the coinbase transaction, however only the state delta is stored due to the potentially large amount of data.

The full node uses these zero-value coinbase outputs to verify correct execution (i.e. that it comes up with the same results as the miner). Once verified, these transactions are relayed in the block to other full nodes which perform the same verification. These transactions also serve as a full audit trail of every action taken on the contract and are permanently secured in the blockchain.

The block size limit has been increased to 10mb to accommodate the anticipated additional data and the consensus time has been reduced to 15 seconds to improve throughput and limit the number of transactions per block.

Conclusion

I have presented a work-in-progress implementation of a fully UTXO based smart contract execution environment which makes minimal changes to BTC core and secures all contract execution and data using native BTC consensus mechanisms.

The reference implementation of the full node and mining software is available on GitHub; links can be found on the Dynamo Foundation website https://dynamocoin.org

Implementation of an ASIC proof POW algorithm

By Shaun Neal

Introduction

In this article I will detail the implementation of the Dynamo programmable proof of work system and its novel consensus-based update mechanism.

Progpow is not new, and many projects have used it with some success to deter, but not eliminate, ASIC mining. One notable exception is Monero; however, that project went so far that it also eliminated GPU mining. I personally believe that GPU mining strikes a good balance between security and decentralization, for reasons I have detailed elsewhere. My goal was to create a GPU-friendly but ASIC-proof POW algorithm. This algorithm is now incorporated into the Dynamo core full node and reference miner, both of which are available on the Dynamo Foundation github repo.

Hash implementation

The progPOW implementation is composed of:

  1. A hash script language which provides for sequential execution of basic cryptographic and mathematical operations.
  2. A periodic auto generator which creates new algorithms to be voted on.
  3. Distribution of the approved algorithm on the blockchain.

This is a sample hash script, which is used to generate the genesis block in Dynamo.

The first line indicates the timestamp to start using the hash. The remaining lines are the hash script, which provides for SHA2, integer ADD/XOR, etc. The script also allows for generation of arbitrarily sized memory blocks and selection of random numbers from those blocks. The full node and the miner software execute the script for each block in order to create the block hash as part of the proof of work consensus. All of the operations are based on 32-bit data sizes in order to allow for optimization on GPUs. For example, the ADD and XOR functions operate on 8 int32_t structures rather than an entire 256-bit structure, and allow for 32-bit overflow as opposed to 256-bit overflow.
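The lane-wise ADD/XOR semantics just described can be sketched directly. This is a minimal illustration of the 8×32-bit representation, not the actual Dynamo script interpreter; each 32-bit word wraps independently with no carry propagated between words, which is what makes the operations cheap on GPU hardware:

```cpp
#include <array>
#include <cstdint>

using Word256 = std::array<uint32_t, 8>;  // 256 bits as 8 GPU-friendly lanes

// Lane-wise ADD: each 32-bit word wraps on its own; no inter-lane carry.
Word256 Add32(const Word256& a, const Word256& b) {
    Word256 r{};
    for (int i = 0; i < 8; ++i) r[i] = a[i] + b[i];  // uint32 overflow wraps
    return r;
}

// Lane-wise XOR over the same representation.
Word256 Xor32(const Word256& a, const Word256& b) {
    Word256 r{};
    for (int i = 0; i < 8; ++i) r[i] = a[i] ^ b[i];
    return r;
}
```

On a GPU, each of the 8 lanes maps to an independent 32-bit ALU operation, so a full 256-bit ADD costs 8 parallel adds with no carry chain.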

New hash creation and distribution

The system is designed to periodically create a set of new random hash scripts and submit them for voting on chain. This will initially occur every 120 days. After the voting period is over, the script winner is distributed on chain via a special OP_RETURN command code. All full nodes will load the new script and start using it as of the effective timestamp. The entire history of hash scripts is preserved on chain and can be reconstructed by any full node.
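Selecting the active script by effective timestamp can be sketched with an ordered map. The class and method names below are my own invention; the behavior matches the description above — the node keeps the full on-chain history keyed by effective timestamp and, for any block time, uses the most recent script whose timestamp has passed:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch: on-chain history of hash scripts keyed by effective timestamp.
class HashScriptHistory {
public:
    void Add(uint64_t effectiveTime, const std::string& script) {
        history_[effectiveTime] = script;
    }
    // Returns the script active at blockTime (empty if none effective yet).
    std::string ActiveAt(uint64_t blockTime) const {
        auto it = history_.upper_bound(blockTime);  // first entry after blockTime
        if (it == history_.begin()) return "";
        return std::prev(it)->second;               // latest entry at/before it
    }
private:
    std::map<uint64_t, std::string> history_;
};
```

Because the history is reconstructed from the chain itself, a freshly syncing node can replay every OP_RETURN script update in order and validate historical blocks with whichever script was active at their timestamps.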

Using this mechanism, the script autonomously changes and updates periodically, rendering any ASIC design pointless because:

  1. The script cannot be parallelized in hardware so there is no energy efficiency to be gained.
  2. The memory generation routine allows for arbitrarily large pools to be created, so if an ASIC is developed with some fixed amount of RAM, it can be bricked simply by generating a slightly larger pool, even if only for a short amount of time.

This design eliminates the need for hard forks and removes core developers, or anyone else, from the process of updating the progpow hash, thereby decentralizing the entire process.

Conclusion

The scripted hash function feature of Dynamo is a novel progpow hash which is GPU friendly, ASIC proof and completely decentralized and autonomous. The code can be viewed on the github repo here:

https://github.com/dynamo-foundation

Practical guide on modifying Bitcoin core to allow for multiple UTXO in the coinbase transaction

By Shaun Neal

Introduction

I am currently working on a crypto ecosystem called Dynamo which features a blockchain full node that supports hybrid POS/POW security and a consensus-directed foundation. As part of the technical design, I needed to add multiple outputs to the coinbase transaction. In standard Bitcoin, there is only one coinbase output: the proof of work miner reward. In my design, I require three: the POW miner reward, the POS staker reward and the development fund allocation. Each mined block's reward will be split, on percentages controlled by the community, among those three groups. After many hours of research and development, I was able to properly implement this. You may find my implementation in the Dynamo github repo listed below.

Goal

  • Use Bitcoin core 21 or later
  • All transactions must be segwit
  • All transactions must be Bech32 format
  • Miners or stakers cannot “cheat” and submit non-compliant blocks

Steps

** The detailed changes needed to implement an additional UTXO appear in commit changesets in the Dynamo github repo dated 4/14/21. I have included some code below for illustration purposes, however I suggest you clone the repo and review the code if you plan to implement this feature for your own coin.

1 — You will first need to fork BTC core and make modifications to accommodate your own coin. There are many resources available for this task and it is beyond the scope of this discussion.

2 — Choose a fee split strategy. My implementation pays a fixed fee to the miner and the development fund (POS is up next for development).
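The split arithmetic itself is simple integer math. The 10% dev share in the test below is a made-up number for illustration (the article does not state the actual percentages); the point of the sketch is that the outputs must sum exactly to the subsidy, with any rounding remainder going to one party:

```cpp
#include <cstdint>

// Sketch of a fixed-percentage coinbase split between miner and dev fund.
// Integer division rounds the dev share down; the miner absorbs the dust
// so the two outputs always sum exactly to the subsidy.
struct RewardSplit { int64_t miner; int64_t devFund; };

RewardSplit SplitSubsidy(int64_t subsidy, int devPercent) {
    RewardSplit s;
    s.devFund = subsidy * devPercent / 100;
    s.miner = subsidy - s.devFund;
    return s;
}
```

Exactness matters because the validation step (step 6 below) re-derives the expected dev fee and rejects any block whose output does not match to the satoshi.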

3 — Create a CScript object for the new outputs. I chose to put mine in chainparams.h and calculate it up front during the initial chainparams creation. To do this I needed to modify GetScriptForDestination, as it expects chainparams to already exist. An alternative strategy would be to create it “on the fly” as each block is submitted, however I felt this was inefficient as it would be creating the same script over and over for each block.

4 — Modify CreateGenesisBlock in chainparams.cpp to add however many new outputs you want.

Make sure to add the empty script at the end; this is the placeholder for the segwit txout that will be added later by the block creation routine. You want all your blocks to be the same format so that they can be easily verified.

  • Important note: do not put a valid pay-to script on any transaction in the genesis block. The genesis transactions cannot be spent, and you don't want your dev fee transaction showing up in a wallet accidentally, because it will cause all sending transactions that include it to fail.

5 — In miner.cpp you are going to modify BlockAssembler::CreateNewBlock to add your new output transaction(s); see the commit changesets referenced above for the exact code.

6 — Finally, in validation.cpp you are going to modify CheckBlock to make sure that no one is cheating and submitting blocks without the dev fee. Note that no validation is done on the genesis block (by comparing the Merkle root). Basically you want to make sure that there are 3 txout (one POW, one dev fee and the witness script) and that the second one actually pays the right amount to the dev fund. When I add proof of stake, there will be a fourth transaction output here.
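The shape of that check can be sketched with simplified stand-ins for Bitcoin core's types (the real code operates on CTxOut inside CheckBlock in validation.cpp). The output ordering assumed here — POW reward, dev fee, witness commitment — and the amounts in the test are illustrative:

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for Bitcoin core's CTxOut.
struct TxOut { int64_t value; };

// Sketch of the anti-cheat check: the coinbase must carry exactly three
// outputs (POW reward, dev fee, segwit commitment), and the dev fee output
// must pay exactly the expected amount.
bool CheckCoinbaseOutputs(const std::vector<TxOut>& vout,
                          int64_t expectedDevFee) {
    if (vout.size() != 3) return false;
    if (vout[1].value != expectedDevFee) return false;
    return true;
}
```

In the real CheckBlock you would also verify that the dev fee output pays to the correct script, not just the correct amount; this sketch omits script matching.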

Note that any time you update any parameter in the txout of the genesis block, you must re-mine the block and create a new genesis hash and Merkle root hash. There are numerous guides available on how to update the genesis block parameters, and the code in the Dynamo repo includes sample code to do this as well in chainparams.cpp.

Conclusion

It is relatively easy, and safe, to modify the Bitcoin core code to create multiple UTXO on the coinbase transaction. Most examples I found used P2PKH addresses and scripts; however, using P2WSH is more compliant, uses easier-to-read addresses and is more secure.

Reference

Dynamo core github: https://github.com/dynamo-foundation/dynamo-core

Acknowledgements

I would like to thank the developers who worked on Qtum and Lux as their code provided inspiration and guidance in my implementation.

Practical Implementation of One UTXO/One Vote Governance Voting in Crypto Asset Systems

By Shaun Neal

Introduction

Dynamo is a proposed crypto asset system which allows for autonomous, on-chain governance via a 1 UTXO/1 Vote mechanism. Here I propose a method to track UTXO voting on a per-proposal basis with minimal impact to system security and performance.

Design Objectives

  • Only allow a UTXO to vote once per proposal
  • A UTXO created after the proposal voting period has started cannot vote
  • A UTXO which has been transferred after voting cannot vote
  • A UTXO should be able to vote on more than one active proposal
  • System performance and security should not be impacted
  • Miners/stakers should be incentivized to process UTXO votes

Smart Contract Design

Each proposal shall consist of N smart contracts, each representing a vote choice. For example, a proposal may be a YES/NO choice, e.g. “Proposal #37: Change the coin logo from green to red”. Contracts would then be created for “Change #37 — YES” and “Change #37 — NO”. Holders of UTXO would then send their UTXO to one of the two contracts to indicate their choice. Any number of choice contracts may be created for any given proposal.

In order to vote, each holder “spends” their UTXO to the chosen contract. The contract validates that the UTXO has not previously been spent on this proposal (by searching the UTXO hash history) and validates that the UTXO was created before the contract was created. This ensures that a UTXO cannot be split after vote initiation and then voted twice.

Once the UTXO input has passed validation, the smart contract records the vote by simple addition to a counter of the number of UTXO spent. The contract then creates a new UTXO, payable to the sender's public key hash, in the same amount that was input.
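The validation and counting logic of one choice contract can be sketched as follows. All names are illustrative; the set of already-voted UTXO hashes is shared across every choice contract of the same proposal, which is what enforces one vote per UTXO per proposal (the return of funds to the sender is described above and not modeled here):

```cpp
#include <cstdint>
#include <set>
#include <string>

// Sketch of one vote-choice contract (e.g. "Proposal #37 -- YES").
class VoteContract {
public:
    explicit VoteContract(uint64_t createdAt) : createdAt_(createdAt) {}

    // proposalVoted is shared by all choice contracts of this proposal.
    bool CastVote(const std::string& utxoHash, uint64_t utxoCreatedAt,
                  std::set<std::string>& proposalVoted) {
        if (utxoCreatedAt >= createdAt_) return false;  // created after vote start
        if (!proposalVoted.insert(utxoHash).second)
            return false;                               // already voted this proposal
        ++voteCount_;                                   // record the vote
        return true;
    }
    uint64_t VoteCount() const { return voteCount_; }

private:
    uint64_t createdAt_;     // contract creation time = vote period start
    uint64_t voteCount_ = 0;
};
```

The age check is what defeats the split-and-revote attack: a UTXO minted or split after the contract exists simply fails the first test.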

Miners and stakers are rewarded from coinbase transactions for the mining/staking of blocks containing execution of the smart contracts and the underlying transactions. The cost of the contract execution is borne by the community developer fund, such that there is no explicit “charge” to the UTXO sender for voting. This is done at the time of mining/staking by reallocation of the developer coinbase UTXO allotment to miners and stakers.

This design requires two important components. First, the contract must have an embedded identifier that links it to the proposal. Second, the contract must have the ability to search the history of the blockchain for all UTXO transactions which have been paid to any contract with the proposal ID. For performance reasons, this must be kept in memory in the full node. Only active proposals need to be maintained, so the memory footprint can be minimized. I propose a B+ tree of order 16 for this purpose, although this will be subject to revision based on performance testing.

The entire process of smart contract validation is secured by my previously proposed hybrid POW/POS security model which makes both a 51% attack and a nothing at stake attack financially unfeasible.

One small drawback of this design is that a holder who intends to issue multiple votes on the same proposal must first “queue up” UTXO in the amounts they wish to split. For example, suppose Proposal #37 says “Change the coin logo to (pick one or more): green, yellow, red, blue”. If a holder wants to vote 10 coins for green and 5 coins for blue, they must create those UTXO amounts before the vote period begins, because UTXO created after the vote period cannot be used to vote. Likewise, any holder who wishes to vote their coins must do so before transferring them to someone else, for the same reason. Finally, coins mined after the vote period begins cannot vote.

Conclusion

This discussion presents a practical implementation of a 1 UTXO / 1 vote mechanism which can be created such that security and performance are not impacted. Further, the design guarantees that users cannot cheat the system by voting more than once per proposal. A concrete implementation of this procedure will be created in the Dynamo asset system.