History of bitcoin


Replace the transaction merkle tree with a Merkle sum tree. This allows SPV nodes to stochastically validate the subsidy in blocks by fetching a random leaf and then fetching its txins.
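
As a rough illustration of the idea, the sketch below (a toy encoding, not Bitcoin's actual tree format) has every node commit to both a hash and the sum of the fees beneath it, so the root pins down the total that the coinbase may claim and a single random leaf plus its path can be spot-checked against it:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(txid: bytes, fee: int):
    # A leaf commits to the transaction id and its fee.
    return (H(b"leaf" + txid + fee.to_bytes(8, "big")), fee)

def parent(left, right):
    # An inner node commits to its children and to the sum of their fees.
    lh, lsum = left
    rh, rsum = right
    total = lsum + rsum
    return (H(b"node" + lh + rh + total.to_bytes(8, "big")), total)

def build(leaves):
    """Build bottom-up; returns (root_hash, total_fees)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The root's sum bounds what the coinbase may pay out (fees plus subsidy).
root, total_fees = build([leaf(b"\x01" * 32, 1000), leaf(b"\x02" * 32, 250)])
assert total_fees == 1250
```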

This way, if you have a stream of UTXO queries coming in, you can make the work of serving them mine for you. Validation, then, is mining. If you don't have enough queries coming in, you just make some up at random.

Represent the script as a merklized abstract syntax tree (MAST). The P2SH address is the root. When spending, the spender need only provide the branch they are executing, plus hashes for the unexecuted branches. This increases privacy and can compress long scripts on spend.
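
A minimal sketch of how that might look (toy hashing and encoding, not any specific BIP): the branches of a script are merkleized, the commitment is the root, and a spend reveals one executed branch plus the sibling hashes along its path.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def mast_root(branches):
    """Root committing to all script branches."""
    level = [H(b"leaf" + b) for b in branches]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(b"node" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def mast_path(branches, index):
    """Sibling hashes needed to reveal branches[index] against the root."""
    level = [H(b"leaf" + b) for b in branches]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])
        level = [H(b"node" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_branch(root, branch, index, path):
    """Spender reveals one executed branch plus hashes of the unexecuted ones."""
    node = H(b"leaf" + branch)
    for sibling in path:
        node = H(b"node" + node + sibling) if index % 2 == 0 else H(b"node" + sibling + node)
        index //= 2
    return node == root

scripts = [b"IF <pubkey_a> CHECKSIG", b"ELSE <timeout> CLTV <pubkey_b> CHECKSIG"]
root = mast_root(scripts)
assert verify_branch(root, scripts[1], 1, mast_path(scripts, 1))
```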

Pruned history: structure transactions so that the parts needed for validation (txins, scriptsigs) are separate from the output data (scriptpubkeys, output and fee values), and put them in separate hash trees. All nodes fully prune all data more than a few thousand blocks back. Massive space savings and improvements in sync-up speed; massive security loss, since an attacker that can construct a large reorg can steal all the transacted coin beyond a certain depth. A normative, committed merklized UTXO data structure allows full validation of current blocks by storageless nodes with SPV security. This can be complemented by proof-of-misbehavior messages that show a block is invalid by packing up the tree fragments that provide the data needed to see its invalidity.

ZKP validated checkpoints: is it possible to use computational integrity to create compact, constant-size checkpoint proofs that show a checkpoint was the result of a faithful validation of the blockchain? This could be used to give pruned history the same security as full Bitcoin, up to the limitations of the integrity proofs. Chain folding: if nodes don't actually need to validate old chain data (because of committed UTXO and pruned history), it would be possible to 'fold up' the historic chain: nodes which are validating just to gauge difficulty can skip the intermediate blocks.

This can be applied recursively. If the backpointers are randomized and every block is a candidate summary, you end up making the chain a merklized skiplist. Alternatively, do not store a UTXO set.

Instead, encode the transaction outputs in the blockchain in a merkle mountain range (an insertion-ordered, fully populated binary tree set up to make appends cheap) over the whole chain. Transactions are required to provide update proofs that show their inputs in the tree and thus also allow you to null them out.

This means that fully validating nodes and miners can be basically storageless, but wallets must take on the cost of remembering their own coins.

A transaction is mined, but it isn't clear which inputs it's spending. Fees are paid by unblinded inputs to prevent DoS attacks. Blinding is done in such a way that double spends are still obvious.

If full nodes become expensive to operate in the future, then they may become uncommon, and this could compromise the security of Bitcoin.

This risk can be reduced if it's made possible for Bitcoin nodes to check all the rules at random and transmit compact proofs of rule violations. If this is done, then even if there is only one honest full node in the world, the system is secure so long as it can communicate with all the others.

In general, in any deterministic computation process, if you have simple state updates and commit to the sequence of states, a compact proof of invalidity can be generated by producing a hash tree fragment pointing to the first invalid state transition. Ideas in this space have previously been discussed under the banner of proof-of-treachery [1].
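
A minimal sketch of that pattern, under the assumptions in the text: the committed sequence of states lives in a merkle tree, the update rule is a deterministic function, and the fraud proof reveals just the two adjacent states around the bad step. The `transition` function and the tree encoding here are purely illustrative.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [H(b"leaf" + l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(b"node" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    level = [H(b"leaf" + l) for l in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])
        level = [H(b"node" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_leaf(root, leaf, index, path):
    node = H(b"leaf" + leaf)
    for sibling in path:
        node = H(b"node" + node + sibling) if index % 2 == 0 else H(b"node" + sibling + node)
        index //= 2
    return node == root

def transition(state: bytes) -> bytes:
    """Stand-in for the deterministic state-update rule being checked."""
    return H(b"step" + state)

def check_fraud_proof(committed_root, i, state_i, path_i, state_next, path_next):
    """True iff both states are committed at positions i, i+1 and the step is invalid."""
    return (verify_leaf(committed_root, state_i, i, path_i)
            and verify_leaf(committed_root, state_next, i + 1, path_next)
            and transition(state_i) != state_next)

# A dishonest sequence: the fourth state does not follow from the third.
states = [b"genesis"]
for _ in range(3):
    states.append(transition(states[-1]))
states[3] = b"tampered"
root = merkle_root(states)
proof = (2, states[2], merkle_path(states, 2), states[3], merkle_path(states, 3))
assert check_fraud_proof(root, *proof)
```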

Right now not all of the rules can be checked randomly or have compact proofs. SPV header checks (time, target, difficulty) already have them. But if SPV nodes don't really check all the headers in the future, it may be useful to arrange old header times in a merkle mountain range to allow proofs of sum difficulty and compact proofs of incorrect difficulty.

Proof of invalid script (possible in the current system): the proof is tree fragments for the invalid txn in question, as well as one invalid input (no need to include more than one).

This could be made more efficient by including commitments to intermediate states, but with the opcode limit all scripts are compact to verify in Bitcoin without doing anything fancier. It also proves nlocktime violations, etc. To prove output value greater than inputs, all inputs must be provided in the proof. Proof of double spend (possible in the current system): the proof is tree fragments for the two transactions which spend the same input. Proof of false inflation (not possible without more data): the coinbase payment is the sum of the fees in a block plus the subsidy.

Fees require knowing the output values of the transaction's inputs; to check the subsidy you must have not only all the transactions but all their inputs as well. With fee commitments in the tree, nodes can randomly check this by grabbing a random txn and checking its inputs, and can compactly prove a violation by showing where the fees don't match their commitments.

Proof of block too large (similar to false inflation): requires all the transactions; can similarly be solved by including the sum of txn sizes in the tree. Proof of spending a non-existent input (requires additional data): the proof is a pair of tree fragments for the records just above and below the missing entry, and another pair for outputs created within the block but consumed.

I think you can even pull that off as a soft-fork. I get your point; sometimes just trust-less is enough. I think the big question is whether you need the self-modifying code that Forth makes possible, i.e. things like SPV-verifiable colored coins. I think it makes most sense when the only PoW is in txs, although exactly what that'd look like is an interesting question. I'd still be in favor of improving things generally, e.

What I'd do is just implement generic SNARK validation and provide the SNARK verification key in the transaction. Though I'm not aware of any way to do that which we'd consider in scope for this discussion. I propose that if our choice of operators is good, then a maximally efficient Winternitz signature will be completely natural.

The public key is just the root hash over this data. So, is there a way with ECDSA, given three messages, to pick (pubkey, r, s) such that (pubkey, r, s) is a valid signature for any one of the three messages? I think the most fundamental thing I've discovered is the concept of how mining can be separated into timestamping and proof-of-publication. Is it back in your possession now?
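
For concreteness, here is a toy Winternitz one-time signature in the spirit of that remark, where the public key is just a hash over the chain tips; the parameters (w = 16, SHA-256) and the encoding are illustrative, not any standardized scheme.

```python
import hashlib, os

W = 16                      # digits are 4 bits each
N = 32                      # hash/seed length in bytes
MSG_DIGITS = 2 * N          # 64 base-16 digits of the message digest
CS_DIGITS = 3               # enough digits for the max checksum 64 * 15
CHAINS = MSG_DIGITS + CS_DIGITS

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def digits(msg: bytes):
    d = []
    for byte in H(msg):                      # sign the digest of the message
        d += [byte >> 4, byte & 0x0F]
    checksum = sum(W - 1 - x for x in d)     # stops forgeries that only advance chains
    for _ in range(CS_DIGITS):
        d.append(checksum % W)
        checksum //= W
    return d

def keygen():
    sk = [os.urandom(N) for _ in range(CHAINS)]
    pk = H(b"".join(chain(s, W - 1) for s in sk))   # "root hash" over the chain tips
    return sk, pk

def sign(sk, msg: bytes):
    return [chain(s, d) for s, d in zip(sk, digits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    tips = b"".join(chain(s, W - 1 - d) for s, d in zip(sig, digits(msg)))
    return H(tips) == pk

sk, pk = keygen()
sig = sign(sk, b"example message")
assert verify(pk, b"example message", sig)
```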

What if that data has been further split into multiple parts with an error-correcting code and spread to multiple machines? Now where does the coin reside?

But there is no need for the best analogies to be physically intuitive; in fact, basically all of higher mathematics is about manipulating abstractions which are in no way physically intuitive. I think this relates to a payment's ability to require transferable restrictions on the next transaction. But make the covenants temporary, the coins themselves perishable, or apply them to user-issued assets (not colored coins but separately issued assets a la Freimarkets), and it is a different story IMHO.

Some of your competition doesn't mind disclosing this, however. I think they should just take the scheme we discussed previously and execute it under a ZKP for general programs. It would be similar in size to the Zerocash proofs. The verifier does this too. Both prover and verifier get a hash root.

The verifier verifies the signature and the ZKP. But it shouldn't be terrible; I believe it would be cheaper than another SHA hash in any case. Or of an encrypted value, or. I think not, at least not with the GGPR12 stuff, as the arithmetic circuit field size is set by the size of the pairing crypto curve. You could get more elaborate, like timelocking the funds and showing that funds beyond the daily withdrawal limits are actually unspendable by the network, but perhaps I'm getting too cypherpunk there.

I'm thinking that for a merklized AST what makes sense is merklized Forth. The Forth dictionary concept is perfect for it, and means you have a simple, easy-to-implement language already used for embedded and other things (and Bitcoin scripting), along with all the usual nice things like editor modes and whatnot. So you've got your parameter stack and return stack, and are thus at the point where you can recreate Bitcoin scripting.
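
As a sketch of how little machinery that takes, here is a toy Forth-style interpreter with a dictionary of words, a parameter stack and a return stack; the word set is illustrative only and is not any proposed Bitcoin opcode set.

```python
def run(program, words=None):
    param, ret = [], []            # parameter stack and return stack
    words = dict(words or {})      # the Forth "dictionary"
    tokens = program.split()
    ip = 0
    while ip < len(tokens):
        tok = tokens[ip]
        ip += 1
        if tok == ":":             # define a new word: ": name ... ;"
            name = tokens[ip]
            end = tokens.index(";", ip)
            words[name] = tokens[ip + 1:end]
            ip = end + 1
        elif tok in words:         # call: push return address, jump into the body
            ret.append((tokens, ip))
            tokens, ip = words[tok], 0
        elif tok == "+":
            b, a = param.pop(), param.pop()
            param.append(a + b)
        elif tok == "dup":
            param.append(param[-1])
        elif tok == "swap":
            param[-1], param[-2] = param[-2], param[-1]
        else:
            param.append(int(tok))
        while ip >= len(tokens) and ret:   # word body finished: pop the return stack
            tokens, ip = ret.pop()
    return param

# ": double dup + ;" defines a word; "3 double double" leaves 12 on the stack.
assert run(": double dup + ; 3 double double") == [12]
```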

Now the interesting thing to do is add TPM functionality, which means a PCR opcode and stack to allow you to select what you want to consider as the start of the current trusted block of code. Then add an encrypted stack, as expected encrypted with H(sec, PCR tip), and some sort of monotonic counter thing.

That should give you enough to do trusted computing with an extremely stable API, and that API itself can be just AST heads of useful library function calls that may actually be implemented directly in C or whatever rather than the opcodes themselves. I don't know that explicitly supporting that makes sense. Equally, Forth is already common in applications, i.e. spacecraft, where you need relatively bare-metal languages with simple frameworks and semantics; note how with Forth it's much easier to get to the level where you trust that the code being run is what you actually wrote than, say, C.

Just be clear what the maximums are for the various parts of the stack.

Dunno yet what the stack datatype should be; MPIs are nice, but there is the subtle issue that it'd be good to have some clear idea of how many operations an operation takes. Of course, really simple would be fixed-width ints, implementing everything higher-level in Forth.

Maybe a merkle mountain range of every value ever associated with a given key? I mentioned to TD earlier today the idea of miners committing to a merkle tree of txids in their mempool, just to prove visibility; you could use that if the commitment included the txins being spent.

Appending needs to touch only the "mountain tips", that is, the perfect merkle trees already stored; for n items stored you'll have on the order of log2(n) trees.
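
A minimal append sketch of that structure (toy hashing, no membership proofs): the state is just the list of mountain tips with their heights, and adding a leaf only merges tips of equal height.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def mmr_append(peaks, leaf: bytes):
    """peaks: list of (height, hash), smallest mountain last."""
    node, height = H(b"leaf" + leaf), 0
    while peaks and peaks[-1][0] == height:      # merge equal-height mountains
        _, sibling = peaks.pop()
        node = H(b"node" + sibling + node)
        height += 1
    peaks.append((height, node))
    return peaks

def mmr_root(peaks) -> bytes:
    """Bag the mountain tips into a single commitment."""
    return H(b"root" + b"".join(h for _, h in peaks))

peaks = []
for i in range(5):
    mmr_append(peaks, i.to_bytes(4, "big"))
# 5 = 0b101 items -> two mountain tips remain (heights 2 and 0).
assert [h for h, _ in peaks] == [2, 0]
root = mmr_root(peaks)
```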

I've got an idea where you'd make transactions carry commitments to previous ones with a merkle-mountain-range-like scheme, so you could efficiently reference any previous transaction up to the genesis block. This is easiest to understand if transactions can only have linear history, but a DAG history is doable too. Anyway, wallet software would receive that history to know the coins are valid, thus pushing validation directly to the users.

Obviously some way of pruning that history is important; SCIP is heavyweight and complex, but could work. So one possible accumulator would be to construct a merkle tree over a bit field with one bit for every integer between 0 and 2^k. You can prove you added an integer to that set by showing the leaves for an operation updating the appropriate bit, and you can remove an integer with another set of leaves.
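
A minimal sketch of that accumulator (toy parameters, hypothetical encoding): the bit field is committed to as a merkle tree whose leaves are 32-bit words, and an insertion or removal proof is the affected leaf plus its merkle path, from which anyone can recompute both the old and the new root.

```python
import hashlib

K = 16                     # toy domain: integers in [0, 2**K)
WORD = 32                  # bits per leaf
LEAVES = 2 ** K // WORD

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def root_from_leaf(leaf: int, index: int, path):
    node = H(b"leaf" + leaf.to_bytes(WORD // 8, "big"))
    for sibling in path:
        node = H(b"node" + node + sibling) if index % 2 == 0 else H(b"node" + sibling + node)
        index //= 2
    return node

def apply_update(old_root, integer: int, leaf: int, path, insert=True):
    """Verify a proof that flips one bit; returns the new root or None."""
    index, bit = divmod(integer, WORD)
    if root_from_leaf(leaf, index, path) != old_root:
        return None                             # stale or bogus proof
    new_leaf = leaf | (1 << bit) if insert else leaf & ~(1 << bit)
    return root_from_leaf(new_leaf, index, path)

def build_levels(leaves):
    levels = [[H(b"leaf" + l.to_bytes(WORD // 8, "big")) for l in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(b"node" + prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def merkle_path(levels, index):
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

# Demo: start from the empty set, then prove insertion of the integer 1000.
words = [0] * LEAVES
levels = build_levels(words)
old_root = levels[-1][0]
idx = 1000 // WORD
new_root = apply_update(old_root, 1000, words[idx], merkle_path(levels, idx))
assert new_root is not None
```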


This can be a broad range of topics for improvements. We also have Jonas, an independent Bitcoin Core developer. We also have Andrew Poelstra, who has been core to the crypto work which has been incorporated into Bitcoin, such as libsecp256k1, which we recently integrated in the 0. It speeds up Bitcoin validation by x. We also have Joseph Poon, the co-author of the Lightning Network paper, and he's running his own Lightning company as well. I am going to jump into this and ask the first question to Jonas.

I was wondering if you could tell us about the risks and challenges with Bitcoin wallets in general. I entered the Bitcoin space in My first thing was that I wanted to build a better UI and better user experience. I thought I could download the software and start using Bitcoin.

It took a couple of days before I could use it. I wanted to improve the user experience. It felt like, oh, that's going to I need to change the Core UI because of its system, like how blocks are going to be validated. Then I found out that it was not decentralized: it's not participating in Bitcoin, connecting to a node, not distributing transactions and blocks. You are only downloading stuff, like a torrent leecher.

We need to change that model. Not everyone can run a full node, because smartphones aren't powerful enough. I want to go in a direction where everyone can run full nodes, connected to their routers perhaps, maybe in your homes or center or whatever you want to call it. And you can still use your smartphone to connect to it and still do Simple Payment Verification wallets; it could be shared between families, between villages or tribes, so that we really participate in Bitcoin, not just consume information.

Obviously there are lots of things to change, but we're going in that direction with pruning and verification; I think that's the main Core change to do to make it more flexible to run a full node. I want to work on a hybrid mode where we can do simple payment verification while bootstrapping a full node. The user experience on the GUI side needs to be changed. When people say Bitcoin-Qt has a bad user experience, I fully agree. But we need to change the fundamentals first. Andrew, you have been doing some crypto work recently.

At Blockstream also, I should mention we both work for Blockstream. You did some work on confidential transactions and some privacy-enhancing crypto work there. Where do you see that aspect of research going in the future? One project that I have been working on at Blockstream with gmaxwell and sipa is Confidential Transactions. It's a technology we're developing for Bitcoin, sidechains and other blockchains, to improve privacy and censorship resistance.

Lately we have been talking about scaling, right? But that's not the only problem with Bitcoin. There's a problem in Bitcoin that all transactions are public. All the information and all the data in transactions are public. This allows people to get the full transaction graph. They can infer a lot from the shape of this transaction graph. So this is not only bad from a surveillance and privacy perspective; there are many companies that are trying to extract data from us, from the blockchain, which creates a censorship vector.

We worry about centralization of mining, because miners are sort of gatekeepers to which transactions get into the blockchain. So what we have been working on with confidential transactions is a way to cryptographically hide the amounts of all transactions. The shape of the transaction graph is still visible, but by hiding the amounts, you can hide what the transaction is doing.

If I create a transaction, a standard single payment, it has some output with some big round number, and then an obvious change output. Hiding the amounts can hide the change address. Someone might pay me; I take the coin they gave me, trace back the history, and see a lump sum that happened at midnight on Thursday; that could be how much money they made. If I am their landlord, I might be able to do bad things like raise their rent. Another problem with amounts being exposed comes when you try to merge transactions. There's another technology developed by gmaxwell, called CoinJoin, where you can combine different transactions.

You take two transactions and try to paste them together. If you have two people wanting to send BTC, normally you would do individual transactions; you would have a bunch of inputs and a bunch of outputs, and you would be able to figure out who owns what in those transactions.

You can also see the change output. But maybe Mark and I can combine our bitcoin transactions, and then we can break this connection between the owners of the inputs and the owners of the outputs. By hiding the amounts, we allow for combining transactions where we no longer have this clear correlation. Nobody can see the amounts, and now it's just a pile of outputs with no amounts associated. We have done this cryptographically. It's implemented in Elements Alpha. We have tried very hard to make sure the performance is good.
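
As a toy illustration of the amount-hiding idea (not the actual Confidential Transactions construction, which uses elliptic-curve Pedersen commitments plus range proofs), the sketch below shows the homomorphic property that makes it work: a verifier can check that inputs and outputs balance without ever seeing the amounts. The modulus and generators here are placeholders and are not cryptographically sound.

```python
import secrets

P = 2**127 - 1            # toy prime modulus; NOT a secure choice of group
G, H_GEN = 5, 7           # toy "generators"; in a real system H's relation to G is unknown

def commit(value: int, blinding: int) -> int:
    # Pedersen-style commitment: hides the value, binds to it, and is additive.
    return (pow(G, value, P) * pow(H_GEN, blinding, P)) % P

# Sender commits to input and output amounts with random blinding factors.
v_in, v_out1, v_out2 = 100, 60, 40
r_in = secrets.randbelow(P - 1)
r_out1 = secrets.randbelow(P - 1)
r_out2 = (r_in - r_out1) % (P - 1)        # chosen so the blinding factors also balance

c_in = commit(v_in, r_in)
c_out1 = commit(v_out1, r_out1)
c_out2 = commit(v_out2, r_out2)

# Verifier: inputs and outputs balance iff the commitments multiply out,
# even though the individual amounts are never revealed.
assert c_in == (c_out1 * c_out2) % P
```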

Hopefully this will eventually be implemented in Bitcoin Core. We care a lot about the performance of that, and we hope that there is a path for inclusion of this into Bitcoin eventually. We think this will improve censorship resistance a lot. Joseph, I have one question for you. You are a co-author of the Lightning Network paper. With the changes coming to Bitcoin this year, checksequenceverify (CSV) and segwit, we might have a full implementation of Lightning before the end of the year. One of the first replies to Satoshi on the mailing list was: this was really cool, but I don't think it scales.

With more users and more activity, there might be some difficulty if everyone on the network knows everything. If you buy a cup of coffee on Bitcoin, everyone on the network knows about it. Everyone running a full node knows about it.

Everyone in the world has to process this transaction. If people are buying tens of thousands of things per second, that's a lot of traffic, and it doesn't make sense. It's like if everyone in the world were on one wifi access point.

You can make that access point faster, but it doesn't solve the problem. The correct way to solve this is to On the Internet, it's messages. The Lightning Network uses real Bitcoin transactions with smart contract scripting mechanisms.

There's no overlay network. These are real Bitcoins. At a high level, this works by establishing a two-party fund of Bitcoin, through a Bitcoin transaction.

It's functionally a ledger entry. Alice and Bob both have a ledger entry. With 1 BTC in there, the actual allocation of who owns what is known to both of them, but it's also cryptographically provable using real Bitcoin transactions. They can update that local state to whatever allocation they want within that value. By having multiple of these two-party safes, or channels, open, they can route payments to anyone inside this network, nearly instantaneously and atomically. There are many interesting use cases, because this enables incredibly high volume.
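
A minimal sketch of that off-chain ledger-entry idea (bookkeeping only; a real Lightning channel also needs funding and commitment transactions, signatures from both parties, revocation, and HTLCs for routing, none of which are modeled here):

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    state_number: int          # each update supersedes lower-numbered states
    balance_alice: int         # satoshis owed to Alice on close
    balance_bob: int           # satoshis owed to Bob on close

class Channel:
    def __init__(self, funding_sat: int, alice_sat: int):
        # The funding is locked in a 2-of-2 on-chain output; here we just track it.
        self.capacity = funding_sat
        self.state = ChannelState(0, alice_sat, funding_sat - alice_sat)

    def pay(self, from_alice: bool, amount: int) -> ChannelState:
        """Both parties agree to a new allocation; nothing touches the chain."""
        a, b = self.state.balance_alice, self.state.balance_bob
        if from_alice:
            a, b = a - amount, b + amount
        else:
            a, b = a + amount, b - amount
        if min(a, b) < 0:
            raise ValueError("payment exceeds channel balance")
        self.state = ChannelState(self.state.state_number + 1, a, b)
        return self.state

    def close(self) -> ChannelState:
        """On close, the latest mutually agreed state is what the chain enforces."""
        return self.state

ch = Channel(funding_sat=1_000_000, alice_sat=600_000)
ch.pay(from_alice=True, amount=150_000)   # instant, off-chain
ch.pay(from_alice=False, amount=50_000)
assert ch.close() == ChannelState(2, 500_000, 500_000)
```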

Because if everyone is receiving every message, even getting there is nigh on difficult or impossible; but with Lightning, because it is packet-routed and secured using the blockchain, you could potentially have millions to billions of transactions. Instead of pay-per-gigabyte, it could be pay-per-kilobyte. You can reduce counterparty risk significantly.

It changes the view of commercial activity: you can do pay-per-view on websites, or pay-per-play on video games, or things like that. I could go into more detail; I felt like I was going a little bit over, OK. So the ultimate view of the way Lightning works is that it reshapes the view of what the blockchain is.

Instead of viewing it as simply a payment system, if you view it as a smart contracting system which enables the blockchain to act as a dispute mediation system, viewing the blockchain as a judge is a lot more understandable and a lot more powerful.

If you can establish these types of agreements between two parties off-chain, you can conduct a lot more activity and many other types of interesting activity. You can write a lot of legal contracts individually: rent, employment agreements. You can have agreements off-chain, and ultimately you have full confidence that these smart contracts will be enforced, and either of you can go to the blockchain to enforce it.

It's even better than a real-life court: you can't convince the judge; there are established roles in this system. Lightning is an early example of a smart contract system. I think you will see some cool stuff in the coming years. Getting checksequenceverify on the chain is very exciting. It's refreshing to see something that was proposed for a frankly much less interesting use case, just related to sidechain pegs or something, and see it used to create something like Lightning, which is much broader in scope.

I like to look at what kind of small changes we can make that will drastically expand use cases and accessibility. One of the things I see happening soon is segwit, in terms of how it changes how scripting is done.