One of the common questions that many in the crypto 2.0 space ask is: what fundamental advantage would an organization have from its management and operations being tied down to hard code on a public blockchain, that could not be had by going the more traditional route? What advantages do blockchain contracts offer over plain old shareholder agreements?
Particularly, even if public-good rationales in favor of transparent governance, and guaranteed-not-to-be-evil governance, can be raised, what is the incentive for an individual organization to voluntarily weaken itself by opening up its innermost source code, where its competitors can see every single action that it takes or even plans to take, while those competitors themselves operate behind closed doors?
There are many paths that one could take to answering this question. For the specific case of non-profit organizations that are already explicitly dedicating themselves to charitable causes, one can rightfully say that the lack of individual incentive is not a problem; they are already dedicating themselves to improving the world for little or no monetary gain to themselves.
For private companies, one can make the information-theoretic argument that a governance algorithm will work better if, all else being equal, everyone can participate and introduce their own information and intelligence into the calculation — a rather reasonable hypothesis given the established result from machine learning that much larger performance gains can be made by increasing the data size than by tweaking the algorithm.
In this article, however, we will take a different and more specific route. Consider a version of the classic prisoner's dilemma: humanity discovers a substance S that can be used to save billions of lives, but substance S can only be produced by working with a strange AI from another dimension whose only goal is to maximize the quantity of paperclips; substance S can also be used to produce paperclips.
We have never interacted with the paperclip maximizer before, and will never interact with it again. Both humanity and the paperclip maximizer will get a single chance to seize some additional part of substance S for themselves, just before the dimensional nexus collapses; but the seizure process destroys some of substance S. The payoff matrix is as follows:

|               | Humans cooperate                           | Humans defect                 |
|---------------|--------------------------------------------|-------------------------------|
| AI cooperates | 2 billion lives saved, 2 paperclips gained | 3 billion lives, 0 paperclips |
| AI defects    | 0 lives, 3 paperclips                      | 1 billion lives, 1 paperclip  |
From our point of view, it obviously makes sense from a practical, and in this case moral, standpoint that we should defect; there is no way that a paperclip in another universe can be worth a billion lives. However, the outcome that this leads to is clearly worse for both parties than if the humans and AI both cooperated — but then, if the AI was going to cooperate, we could save even more lives by defecting ourselves, and likewise for the AI if we were to cooperate.
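To make the arithmetic explicit, here is a small sketch in Python (the helper functions and strategy labels are mine; the payoffs are taken from the matrix above) checking that defection is the dominant strategy for each party, even though mutual defection leaves both parties worse off than mutual cooperation:

```python
# Payoffs from the matrix above, indexed by (human_action, ai_action):
# (lives saved in billions, paperclips gained).
PAYOFFS = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}

def human_payoff(h, a):
    return PAYOFFS[(h, a)][0]

def ai_payoff(h, a):
    return PAYOFFS[(h, a)][1]

# Whatever the AI does, the humans save more lives by defecting...
for a in ("cooperate", "defect"):
    assert human_payoff("defect", a) > human_payoff("cooperate", a)

# ...and whatever the humans do, the AI gains more paperclips by defecting.
for h in ("cooperate", "defect"):
    assert ai_payoff(h, "defect") > ai_payoff(h, "cooperate")

# Yet mutual defection is worse for both parties than mutual cooperation.
assert human_payoff("defect", "defect") < human_payoff("cooperate", "cooperate")
assert ai_payoff("defect", "defect") < ai_payoff("cooperate", "cooperate")

print("Defection dominates for each party, but defect/defect is Pareto-inferior "
      "to cooperate/cooperate.")
```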
When there is no ability to pre-contract, laws penalize unilateral defection. However, there are still many situations, particularly when many parties are involved, where opportunities for defection exist.
Of course, in many of these cases, people sometimes act morally and cooperate, even though doing so leaves them personally worse off. But why do they do this?
We were produced by evolution, which is generally a rather selfish optimizer, so why would we behave this way? There are many explanations; the one we will focus on involves the concept of superrationality.
Consider the following explanation of virtue, courtesy of David Friedman: essentially, it is cognitively hard to convincingly fake being virtuous while being greedy whenever you can get away with it, and so it makes more sense for you to actually be virtuous. Much ancient philosophy follows similar reasoning, seeing virtue as a cultivated habit; David Friedman simply did us the customary service of an economist and converted the intuition into more easily analyzable formalisms.
Now, let us compress this formalism even further. In short, the key point here is that humans are leaky agents — with every second of our action, we essentially indirectly expose parts of our source code.
If we are actually planning to be nice, we act one way, and if we are only pretending to be nice while actually intending to strike as soon as our friends are vulnerable, we act differently, and others can often notice. This might seem like a disadvantage; however, it allows a kind of cooperation that was not possible with the simple game-theoretic agents described above. In this case, the agents can adopt the following strategy, which we assume to be a virtuous strategy: try to determine if the other party is virtuous; if the other party is virtuous, cooperate; if the other party is not virtuous, defect.
If two virtuous agents come into contact with each other, both will cooperate, and get a larger reward. If a virtuous agent comes into contact with a non-virtuous agent, the virtuous agent will defect.
Hence, in all cases, the virtuous agent does at least as well as the non-virtuous agent, and often better. This is the essence of superrationality.
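A minimal toy model of this argument, under the simplifying assumption that the "leak" is perfect, i.e. that each agent can reliably tell whether its counterparty's visible strategy is the virtuous one (the payoff numbers and agent names below are illustrative, not from the text):

```python
# Toy model of superrational cooperation between agents with visible strategies.
# Standard prisoner's dilemma payoffs: mutual cooperation 2, mutual defection 1,
# sucker 0, temptation 3.
PAYOFF = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

def virtuous(opponent_is_virtuous):
    # Cooperate only with agents whose visible strategy is also virtuous.
    return "C" if opponent_is_virtuous else "D"

def selfish(opponent_is_virtuous):
    # Always defect, regardless of the opponent.
    return "D"

AGENTS = {"virtuous": virtuous, "selfish": selfish}

def play(name_a, name_b):
    move_a = AGENTS[name_a](name_b == "virtuous")
    move_b = AGENTS[name_b](name_a == "virtuous")
    return PAYOFF[(move_a, move_b)]

for opponent in AGENTS:
    v_payoff, _ = play("virtuous", opponent)
    s_payoff, _ = play("selfish", opponent)
    print(f"against {opponent}: virtuous gets {v_payoff}, selfish gets {s_payoff}")
    # Against every opponent type, the virtuous agent does at least as well.
    assert v_payoff >= s_payoff
```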
Of course, there is a class of individuals who can convincingly pretend to be friendly while actually planning to defect at every moment; these are called sociopaths, and they are perhaps the primary defect of this system when implemented by humans. This kind of superrational cooperation has arguably been an important bedrock of human cooperation for the last ten thousand years, allowing people to be honest to each other even in those cases where simple market incentives might instead drive defection.
Most people in modern civilization have benefited quite handsomely from, and have also indirectly financed, at least some instance of someone in some third-world country dumping toxic waste into a river to build products more cheaply for them; however, we do not even realize that we are indirectly participating in such defection; corporations do the dirty work for us. The market is so powerful that it can arbitrage even our own morality, placing the dirtiest and most unsavory tasks in the hands of those individuals who are willing to compromise their conscience at the lowest cost, and effectively hiding this from everyone else.
The corporations themselves are perfectly able to have a smiley face produced as their public image by their marketing departments, leaving it to a completely different department to sweet-talk potential customers. This second department may not even know that the department producing the product is any less virtuous and sweet than they are. The internet has often been hailed as a solution to many of these organizational and political problems, and indeed it does do a great job of reducing information asymmetries and offering transparency.
However, as far as the decreasing viability of superrational cooperation goes, it can also sometimes make things even worse. This is part of the reason why scams online and in the cryptocurrency space are more common than offline, and is perhaps one of the primary arguments against moving all economic interaction to the internet a la cryptoanarchism (the other argument being that cryptoanarchism removes the ability to inflict unboundedly large punishments, weakening the strength of a large class of economic mechanisms).
A much greater degree of transparency, arguably, offers a solution. Individuals are moderately leaky, current centralized organizations are less leaky, but organizations where information is constantly being randomly released to the world left, right and center are even more leaky than individuals are. This is essentially a restatement of the founding ideology behind Wikileaks and, more recently, slur, an incentivized Wikileaks alternative. However, Wikileaks exists, and yet shadowy centralized organizations continue to exist and are in many cases still quite shadowy.
Decentralized autonomous organizations, as a concept, are unique in that their governance algorithms are not just leaky, but actually completely public: the code, and the objective that the code pursues, are visible to all, so an organization's behavior can be predicted from its stated objective rather than from the private intentions of whoever happens to run it. A futarchy maximizing the average human lifespan will act very differently from a futarchy maximizing the production of paperclips, even if the exact same people are running it.
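As a rough illustration of why the objective, rather than the operators, determines behavior, here is a hypothetical sketch; the `govern` helper and the candidate actions are invented for this example and do not correspond to any actual futarchy implementation:

```python
# Hypothetical illustration: the same public governance procedure produces very
# different behavior depending only on the objective it is told to maximize.
def govern(candidate_actions, objective):
    # Pick the action whose predicted outcome scores best under the objective.
    return max(candidate_actions,
               key=lambda action: objective(action["predicted_outcome"]))

candidate_actions = [
    {"name": "fund medical research",
     "predicted_outcome": {"lifespan_years": 1.5, "paperclips": 0}},
    {"name": "build paperclip factory",
     "predicted_outcome": {"lifespan_years": 0.0, "paperclips": 10**9}},
]

maximize_lifespan = lambda outcome: outcome["lifespan_years"]
maximize_paperclips = lambda outcome: outcome["paperclips"]

print(govern(candidate_actions, maximize_lifespan)["name"])    # fund medical research
print(govern(candidate_actions, maximize_paperclips)["name"])  # build paperclip factory
```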
Now, what would superrational cooperation using DAOs look like? First, we would need to see some DAOs actually appear, and there are a few use-cases where it seems not too far-fetched to expect them to succeed. The simplest kind cannot ever do anything but perhaps adjust a few of their own parameters to maximize some utility metric via PID controllers, simulated annealing or other simple optimization algorithms.
Hence, they are in a weak sense superrational, but they are also rather limited and stupid, and so they will often rely on being upgraded by an external process which is not superrational at all. The second kind is DAOs with a governance algorithm capable of making theoretically arbitrary decisions: futarchy, various forms of democracy, and various forms of subjective extra-protocol governance.
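As a sketch of what the first, "limited and stupid" kind of DAO might look like, here is a toy simulated-annealing loop over a single fee parameter; the revenue model below is invented purely for illustration, since a real contract would measure its revenue on-chain rather than compute it from a formula:

```python
import math
import random

random.seed(0)

# Invented stand-in for measured revenue as a function of the fee parameter.
def observed_revenue(fee):
    demand = max(0.0, 100.0 - 40.0 * fee)  # demand falls as the fee rises
    return fee * demand

# Simulated annealing over the one parameter the contract is allowed to adjust.
def anneal(fee, steps=2000, temperature=1.0, cooling=0.995):
    best = current = fee
    for _ in range(steps):
        candidate = current + random.uniform(-0.05, 0.05)
        delta = observed_revenue(candidate) - observed_revenue(current)
        # Always accept improvements; accept regressions with shrinking probability.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            if observed_revenue(current) > observed_revenue(best):
                best = current
        temperature *= cooling
    return best

print("fee chosen by the contract:", round(anneal(fee=0.1), 3))
```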
Once DAOs can make arbitrary decisions, they will be able to engage in superrational commerce not only with their human customers, but also potentially with each other. What kinds of market failures can superrational cooperation solve that plain old regular cooperation cannot? Public goods problems may unfortunately be outside the scope; none of the mechanisms described here solve the massively-multiparty incentivization problem.
With public goods, the whole problem is that there is no way to exclude anyone from benefiting, so the strategy of cooperating only with virtuous counterparties fails. However, anything related to information asymmetries falls squarely within the scope, and this scope is large indeed; as society becomes more and more complex, cheating will in many ways become progressively easier to do and harder to police or even understand; the modern financial system is just one example.
Perhaps the true promise of DAOs, if there is any promise at all, is precisely to help with this. To prevent this would require the repayment code to have control over any possible expenditure, at which point we hit the Halting Problem and simple practicality almost instantly.
Seems a better line than attempting to be rid of government altogether. We think of transparency as always being good, but if that were the case, why was there a need to institute the secret ballot? So transparency is good in the sense of accountability, but it also makes the transparent subject vulnerable to bribes and to intimidation.
Seems like the ideal solution is to be transparent in the execution of policy and secret in the process by which policy is created. The smoke-filled back room of secrecy is something people hate, but people also hate the fact that lobbyists wield a lot of power relative to the money they spend to influence elections.
If you want people to vote their conscience, free from intimidation and bribes, you need to shield them from the retribution that might result if they choose to vote contrary to the interests of those who have money or power. In this way, they actually desire something that is contrary to their best interest. So I think this article would be really helpful if it could differentiate between policy creation and policy execution.
For the latter, transparency is highly desired; for the former, transparency can result in those citizens with the most money and the most power to lobby (aka bribe and intimidate) deciding how policy is created.
There is some interesting academic work on some of these issues, and I apologize for not giving citations to some of it here. Hence the big difference in behavior of space-faring people (astronauts, cosmonauts, etc.). Second, many claim that human decision making is irrational by definition: the decision itself is made first, and the rationale applied later to explain and support it.
DAOs are definitely the future, for countless reasons. We already created airplanes, cell phones and genetically modified chickens, so how hard can that be?! I wonder if you could continue the conversation over email? I think superrationality, as you said, is a matter of scale, and this is a most interesting parameter in human development when we are thinking about globalization.
Now this is my next question: how do we reduce the harmful effects of globalization, for instance to protect our planet from ecological disaster? Is a human alone capable of superrationality at a large scale, or does he need a DAO to keep track of this progress? Full transparency is the only option! Privacy Is The Enemy. Is there anything more out there on this? I noticed you linked to all the other stuff. Thanks in advance, James.