Solidity 0.7.2 Release Announcement

Solidity v0.7.2 fixes a bug in free functions, which had been introduced with v0.7.1, and adds compiler-generated utility file export. Furthermore, it comes with considerably broadened language support in the SMTChecker.

Important Bugfixes: Free Function Overloading Checks. Free functions were introduced in the previous release (Solidity v0.7.1). It turned…

Eth 2.0 Dev Update #56 — “Getting ready for Spadina”

Testnet Updates and Hard Fork Capabilities

Spadina testnet release, genesis event rehearsal:

The Spadina testnet has been set to genesis on September 29 at 12pm UTC (genesis time: 1601380800). This is a dress-rehearsal testnet for the community to practice sending deposits and launching beacon nodes and validator clients from genesis. We all know practice makes perfect, and this will help everyone become familiar with the process, since depositing and genesis are the more difficult and risky parts. This is a small-scale testnet with 1,024 validators required for genesis, whereas the mainnet spec requires 16,384 validators to launch. The testnet is meant to be short-lived (~3 days); after that, whoever wants to maintain it may continue to do so.

Some useful links:

Oh! and there’s still time to send your deposits, so don’t miss out on the fun!

Weak subjectivity spec and implementation

The weak subjectivity check is necessary to defend against a problem known as a long-range attack. A weak subjectivity checkpoint serves the same purpose as a genesis block, which is finalized and irreversible by definition. A beacon node that uses a weak subjectivity checkpoint must ensure that the checkpoint is always part of the canonical chain. If a node sees a block conflicting with a weak subjectivity checkpoint, it will immediately reject the block and shut itself down.
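The rejection rule can be sketched as a simple ancestor check. This is an illustrative model only, not Prysm's actual implementation; the block roots and the parent map below are hypothetical:

```python
def violates_weak_subjectivity(block_root, parent_of, ws_checkpoint_root):
    """Return True if the weak subjectivity checkpoint is NOT an ancestor
    of the given block, i.e. the block lives on a conflicting chain."""
    cur = block_root
    while cur is not None:
        if cur == ws_checkpoint_root:
            return False  # checkpoint is on this chain: block is acceptable
        cur = parent_of.get(cur)  # walk back toward genesis
    return True  # reached genesis without meeting the checkpoint


# Toy chains: genesis <- a <- b (canonical, contains checkpoint `a`)
#             genesis <- x <- y (an attacker's long-range fork)
parents = {"a": None, "b": "a", "x": None, "y": "x"}
assert not violates_weak_subjectivity("b", parents, "a")
assert violates_weak_subjectivity("y", parents, "a")
```

A node configured with checkpoint `a` would follow the first chain and refuse the second, exiting rather than silently adopting an attacker's history.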

There has been great analysis by Aditya Asgaonkar on how long a safe weak subjectivity period should be; for the details, I recommend reading his post. The official eth2 spec repo has a new section on weak subjectivity, which will be implemented by the client teams.

In Prysm, we have started our implementation in the following PR. Node operators will soon be able to try adding a weak subjectivity checkpoint via the command line. For more background on weak subjectivity, I recommend the following post written by Vitalik.

thank you weak subjectivity check for preventing me going on an attacker’s chain! ❤

Upgrades & hard forks in ETH2

Ethereum 2.0 is a phased rollout starting with Phase 0. Each phase requires a critical system upgrade, sometimes referred to as a “hard fork”, in which all participants in the system will switch to a different set of rules or parameters at a specific time. The current specification in phase 0 is set up to support critical system upgrades but does not outline exactly how such an upgrade would be performed. This week, our teammate Preston Van Loon investigates a potential implementation for applying upgrades in phase 0. Check out the one page design document here. In a collaborative effort, Danny Ryan responded in another document here.

Merged Code, Pull Requests, and Issues

Process logs streaming via websockets

We now have the ability to subscribe to logs for our beacon node and validator client via a websocket connection, making it easy to display those logs in dashboards or stream them to other processes! Beacon node logs can be accessed at ws://localhost:8080/logs by default and the validator logs at ws://localhost:8081/logs.

Eth2 Standard API definitions complete + implementation started

Our team member Ivan has been working on integrating the standard API for Eth2 into Prysm in order for our API to align with the official standard. Once all clients align with these standards, tooling could be built that is compatible with every client and integrations become much simpler. This can open the door for more block explorers, community tools, and overall more seamless integrations that use multiple clients. We have finished the definitions and are moving forward with the integration.

Voluntary exits implementation ready

After several weeks of work we are happy to announce that voluntary exits are finally complete. Most of the work concentrated on initiating the exit on the validator’s side and sending it over to the beacon node via a gRPC call. This week the last PR has been merged, the documentation has been released and the first validator has exited (big shout-out to our Discord user @Raphael for sacrificing his validator to help us test the feature on Medalla).

👋 good bye to validator 39048

As usual, we are encouraging everyone to try out the functionality yourself. This will help us improve user experience and find possible implementation bugs. And it will give you the warm fuzzy feeling that your hard-earned GöETH is finally safe and sound.

Test coverage improvements

In a continuous effort to improve the test coverage of critical components we have landed a number of important commits. The most significant update is related to full coverage of initial synchronization state transition functions (see #7320, #7286, and #7285). The updated test suite handles all state transitions that might occur during initial synchronization, which solidifies the codebase even further, before the mainnet. Another important area that has received more tests is related to attestation proposing mechanisms (see #7267 and #7206).

We will keep extending our quality assurance infrastructure, making sure that all of the critical paths are fully covered.

Prysm web UI beta testing very soon

We know the community really wants a user interface for their eth2 nodes and validators, and we are planning our first round of beta testing the first week of October! Our web UI has been shaping up nicely and includes a lot of useful functionality for those stakers that are more comfortable on the web than interacting with terminal commands. We designed and built the entire interface ourselves, putting our heart into creating something useful for our stakers.

We believe this interface will add a lot more usability to our client, making it a more appealing choice going into mainnet. Some of the features included at launch are:

  • Ability to create a new HD wallet or import keystores
  • Ability to see your validators’ recent performance, wallet details, and more
  • Ability to monitor your beacon node and validator logs
  • Ability to look at your beacon node’s sync status
  • Ability to see the validator queue and your validators’ places in it
  • Ability to monitor global validator participation
  • Ability to back up keys, filter keys, and add more keys to your wallet

Data migration for slashing protection between clients

As part of eth2's multi-client approach, it is important to support easy and safe validator migration between clients. A safe transition consists of both moving public keys safely and moving slashing protection database data between clients. As part of this effort, @sproul from the Sigma Prime team compiled a doc standardizing the local slashing protection data interchange format. To support the full standard, we are making changes to our local protection database and adding import and export features to our codebase. We hope to have the new protection database format running on most nodes in the coming weeks.

Interested in Contributing?

We are always looking for devs interested in helping us out. If you know Go or Angular and want to contribute to the forefront of research on Ethereum, please drop us a line and we’d be more than happy to help onboard you :).

Check out our contributing guidelines and our open projects on Github. Each task and issue is grouped into the Phase 0 milestone along with a specific project it belongs to.

As always, follow us on Twitter or join our Discord server and let us know what you want to help with.

Official, Prysmatic Labs Ether Donation Address


Official, Prysmatic Labs ENS Name


Eth 2.0 Dev Update #56 — “Getting ready for Spadina” was originally published in Prysmatic Labs on Medium, where people are continuing the conversation by highlighting and responding to this story.

Bitcoin on Ethereum Blockchain, fairy tale, or reality? An introduction of tBTC

Randomness: Keep Network and tBTC

There are three projects involved in the functionality of tBTC: Keep, Cross-Chain Group, and Summa. Each has a specific technical function that makes tBTC a safe platform for users to earn on an Ethereum-based platform with their Bitcoin. Keep's part is providing the random beacon that is responsible for selecting the signatories to tBTC deposits.

Keep brings different features to the system, and some of them include:

  • Privacy

As a way to improve the security of transactions, Keep provides an enhanced privacy framework for users by ensuring that important network components are kept as private as possible. Signing groups cannot function unless signing takes place with a private key that is unknown to any single party, and the same applies to the random beacon. Randomness is a core functionality of Keep: the parties would have to collude to ascertain what a user is up to, and such collusion is made impractical by the random selection of signers, which invariably promotes the privacy of transactions.

  • Protection

It is important to note that tBTC is an on-chain framework; however, private information is stored by Keep with the aid of threshold ECDSA, and is kept off-chain. Keep communicates through the Ethereum chain but functions away from it, while the smart contract protocols of both the Keep network and tBTC interact with each other. This is a way to ensure that users’ privacy and earning remain protected.

  • Trustlessness

Trustlessness is assured with tBTC because the platform has a framework of “signers’ groups” that mitigates risks from counterparties. With the facilitation of the signers’ groups, transactions are performed without any form of interference from a middleman.

Primarily, Keep helps maintain the trustlessness needed to keep tBTC functioning effectively with the implementation of the random beacon.
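To illustrate how a public random beacon can drive signer selection that no one can bias or predict in advance, here is a minimal sketch. The function name, hashing scheme, and sampling method are illustrative assumptions; Keep's actual protocol uses a threshold relay and BLS cryptography, not this code:

```python
import hashlib
import random

def select_signing_group(candidates, beacon_output, group_size):
    """Deterministically sample a signing group from a public random
    beacon output. Every observer derives the same group, but nobody
    can predict it before the beacon value is revealed."""
    seed = hashlib.sha256(beacon_output).digest()
    rng = random.Random(seed)
    return rng.sample(sorted(candidates), group_size)


operators = {"op1", "op2", "op3", "op4", "op5", "op6", "op7", "op8"}
group = select_signing_group(operators, b"beacon-round-42", 3)
# Same beacon output -> same group, so the selection is verifiable by anyone.
assert group == select_signing_group(operators, b"beacon-round-42", 3)
assert len(group) == 3
```

Because the seed comes from a shared, unpredictable beacon value, no individual operator can arrange in advance to be chosen, which is the property that underpins the collusion resistance described above.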

Detecting Ownership Takeovers Using Mythril

Mythril is an analysis tool that uses symbolic execution to find vulnerabilities in smart contracts. Mythril even generates exploits for the vulnerabilities it finds 🚀. In a previous article, I wrote about Mythril internals and symbolic execution. In this article, I’ll show how I use Mythril to detect ownership takeover vulnerabilities. I’ll also use Mythril’s new plugin system to install and release plugins with ease!
Introduction Out of the box, Mythril comes with several zero-setup detection modules.

The Solidity Underhanded Contest is back!

We’re excited to share that the Solidity Underhanded Contest is finally back! Inspired by the Underhanded C Contest and the first Underhanded Solidity Contest, organized in 2017 by Nick Johnson, we decided it is time for a much needed revival. The goal of this contest is to write innocent-looking Solidity…

eth2 quick update no. 16

Can’t travel these days / Miss the people, not the planes / Spadina, not Spain

tl;dr: Spadina (deposit and genesis dress rehearsal) testnet coming up; Medalla Data Challenge in progress; RFP for audit of the blst super-fast BLS12-381 signature library.

💥 Spadina “dress rehearsal” just around the corner. We realize that both…

Coordination, Good and Bad

Special thanks to Karl Floersch and Jinglan Wang for feedback and review

See also:

Coordination, the ability for large groups of actors to work together for their common interest, is one of the most powerful forces in the universe. It is the difference between a king comfortably ruling a country as an oppressive dictatorship, and the people coming together and overthrowing him. It is the difference between the global temperature going up 3–5°C and the temperature going up by a much smaller amount if we work together to stop it. And it is the factor that makes companies, countries and any social organization larger than a few people possible at all.

Coordination can be improved in many ways: faster spread of information, better norms that identify what behaviors are classified as cheating along with more effective punishments, stronger and more powerful organizations, tools like smart contracts that allow interactions with reduced levels of trust, governance technologies (voting, shares, decision markets…), and much more. And indeed, we as a species are getting better at all of these things with each passing decade.

But there is also a very philosophically counterintuitive dark side to coordination. While it is emphatically true that “everyone coordinating with everyone” leads to much better outcomes than “every man for himself”, what that does NOT imply is that each individual step toward more coordination is necessarily beneficial. If coordination is improved in an unbalanced way, the results can easily be harmful.

We can think about this visually as a map, though in reality the map has many billions of “dimensions” rather than two:

The bottom-left corner, “every man for himself”, is where we don’t want to be. The top-right corner, total coordination, is ideal, but likely unachievable. But the landscape in the middle is far from an even slope up, with many reasonably safe and productive places that it might be best to settle down in and many deep dark caves to avoid.

Now what are these dangerous forms of partial coordination, where someone coordinating with some fellow humans but not others leads to a deep dark hole? It’s best to describe them by giving examples:

  • Citizens of a nation valiantly sacrificing themselves for the greater good of their country in a war…. when that country turns out to be WW2-era Germany or Japan
  • A lobbyist giving a politician a bribe in exchange for that politician adopting the lobbyist’s preferred policies
  • Someone selling their vote in an election
  • All sellers of a product in a market colluding to raise their prices at the same time
  • Large miners of a blockchain colluding to launch a 51% attack

In all of the above cases, we see a group of people coming together and cooperating with each other, but to the great detriment of some group that is outside the circle of coordination, and thus to the net detriment of the world as a whole. In the first case, it’s all the people that were the victims of the aforementioned nations’ aggression that are outside the circle of coordination and suffer heavily as a result; in the second and third cases, it’s the people affected by the decisions that the corrupted voter and politician are making, in the fourth case it’s the customers, and in the fifth case it’s the non-participating miners and the blockchain’s users. It’s not an individual defecting against the group, it’s a group defecting against a broader group, often the world as a whole.

This type of partial coordination is often called “collusion”, but it’s important to note that the range of behaviors that we are talking about is quite broad. In normal speech, the word “collusion” tends to be used more often to describe relatively symmetrical relationships, but in the above cases there are plenty of examples with a strong asymmetric character. Even extortionate relationships (“vote for my preferred policies or I’ll publicly reveal your affair”) are a form of collusion in this sense. In the rest of this post, we’ll use “collusion” to refer to “undesired coordination” generally.

Evaluate Intentions, Not Actions (!!)

One important property of especially the milder cases of collusion is that one cannot determine whether or not an action is part of an undesired collusion just by looking at the action itself. The reason is that the actions that a person takes are a combination of that person’s internal knowledge, goals and preferences together with externally imposed incentives on that person, and so the actions that people take when colluding, versus the actions that people take on their own volition (or coordinating in benign ways) often overlap.

For example, consider the case of collusion between sellers (a type of antitrust violation). If operating independently, each of three sellers might set a price for some product between $5 and $10; the differences within the range reflect difficult-to-see factors such as the seller’s internal costs, their own willingness to work at different wages, supply-chain issues and the like. But if the sellers collude, they might set a price between $8 and $13. Once again, the range reflects different possibilities regarding internal costs and other difficult-to-see factors. If you see someone selling that product for $8.75, are they doing something wrong? Without knowing whether or not they coordinated with other sellers, you can’t tell! Making a law that says that selling that product for more than $8 would be a bad idea; maybe there are legitimate reasons why prices have to be high at the current time. But making a law against collusion, and successfully enforcing it, gives the ideal outcome – you get the $8.75 price if the price has to be that high to cover sellers’ costs, but you don’t get that price if the factors driving prices up naturally are low.

This applies in the bribery and vote selling cases too: it may well be the case that some people vote for the Orange Party legitimately, but others vote for the Orange Party because they were paid to. From the point of view of someone determining the rules for the voting mechanism, they don’t know ahead of time whether the Orange Party is good or bad. But what they do know is that a vote where people vote based on their honest internal feelings works reasonably well, but a vote where voters can freely buy and sell their votes works terribly. This is because vote selling has a tragedy-of-the-commons: each voter only gains a small portion of the benefit from voting correctly, but would gain the full bribe if they vote the way the briber wants, and so the required bribe to lure each individual voter is far smaller than the bribe that would actually compensate the population for the costs of whatever policy the briber wants. Hence, votes where vote selling is permitted quickly collapse into plutocracy.
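The tragedy-of-the-commons arithmetic behind vote selling can be made concrete with a toy calculation. All of the numbers below are made up for illustration:

```python
n_voters = 1_000_001
total_harm = 50_000_000.0          # damage the briber's policy does to society
pivot_probability = 1 / n_voters   # rough chance any single vote is decisive

# Expected value to one voter of voting honestly: tiny, because each voter
# internalizes only a sliver of the collective outcome.
honest_value = total_harm * pivot_probability   # ~ $50

bribe = honest_value * 1.2            # a little over $50 comfortably flips a voter
majority = n_voters // 2 + 1
total_bribe_cost = bribe * majority   # ~ $30M

# Buying the election costs far less than the harm the policy inflicts,
# so a briber whose policy gains them more than the bribe total profits
# while society as a whole loses.
assert total_bribe_cost < total_harm
```

The gap between the total bribe cost and the total harm is exactly the slippage that makes vote markets collapse into plutocracy.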

Understanding the Game Theory

We can zoom further out and look at this from the perspective of game theory. In the version of game theory that focuses on individual choice – that is, the version that assumes each participant makes decisions independently and that does not allow for the possibility of groups of agents working as one for their mutual benefit – there are mathematical proofs that at least one stable Nash equilibrium must exist in any game. In fact, mechanism designers have very wide latitude to “engineer” games to achieve specific outcomes. But in the version of game theory that allows for the possibility of coalitions working together (i.e. “colluding”), called cooperative game theory, we can prove that there are large classes of games that do not have any stable outcome (called a “core”) from which no coalition can profitably deviate.

One important class of inherently unstable games is majority games. A majority game is formally described as a game of agents in which any subset comprising more than half of them can capture a fixed reward and split it among themselves – a setup eerily similar to many situations in corporate governance, politics and many other areas of human life. That is to say, if there is a situation with some fixed pool of resources and some currently established mechanism for distributing those resources, and it is unavoidably possible for 51% of the participants to conspire to seize control of the resources, then no matter what the current configuration is, there is always some conspiracy that can emerge that would be profitable for its participants. However, that conspiracy would then in turn be vulnerable to potential new conspiracies, possibly including a combination of previous conspirators and victims… and so on and so forth.

Round   A     B     C
1       1/3   1/3   1/3
2       1/2   1/2   0
3       2/3   0     1/3
4       0     1/3   2/3
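The cycling in the rounds above can be checked mechanically. In a three-player majority game with a fixed reward of 1, any two players form a majority; a small sketch confirms that every one of the listed allocations admits a two-player coalition that can profitably deviate:

```python
from fractions import Fraction as F
from itertools import combinations

rounds = [
    {"A": F(1, 3), "B": F(1, 3), "C": F(1, 3)},
    {"A": F(1, 2), "B": F(1, 2), "C": F(0)},
    {"A": F(2, 3), "B": F(0),    "C": F(1, 3)},
    {"A": F(0),    "B": F(1, 3), "C": F(2, 3)},
]

def profitable_coalition(alloc):
    """Find two players whose combined share is below the full reward of 1;
    by seizing the whole reward and splitting the surplus, both strictly gain."""
    for p, q in combinations(alloc, 2):
        if alloc[p] + alloc[q] < 1:
            return (p, q)
    return None

# No allocation is stable: some majority pair always has an incentive to defect.
for alloc in rounds:
    assert profitable_coalition(alloc) is not None
```

In fact, for three players the deviating pair always exists for any split of the reward, since the two smallest shares sum to at most 2/3 – which is precisely why this game has an empty core.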

This fact, the instability of majority games under cooperative game theory, is arguably highly underrated as a simplified general mathematical model of why there may well be no “end of history” in politics and no system that proves fully satisfactory; I personally believe it’s much more useful than the more famous Arrow’s theorem, for example.

Note once again that the core dichotomy here is not “individual versus group”; for a mechanism designer, “individual versus group” is surprisingly easy to handle. It’s “group versus broader group” that presents the challenge.

Decentralization as Anti-Collusion

But there is another, brighter and more actionable, conclusion from this line of thinking: if we want to create mechanisms that are stable, then we know that one important ingredient in doing so is finding ways to make it more difficult for collusions, especially large-scale collusions, to happen and to maintain themselves. In the case of voting, we have the secret ballot – a mechanism that ensures that voters have no way to prove to third parties how they voted, even if they want to prove it (MACI is one project trying to use cryptography to extend secret-ballot principles to an online context). This disrupts trust between voters and bribers, heavily restricting undesired collusions that can happen. In the case of antitrust and other corporate malfeasance, we often rely on whistleblowers and even give them rewards, explicitly incentivizing participants in a harmful collusion to defect. And in the case of public infrastructure more broadly, we have that oh-so-important concept: decentralization.

One naive view of why decentralization is valuable is that it’s about reducing risk from single points of technical failure. In traditional “enterprise” distributed systems, this is often actually true, but in many other cases we know that this is not sufficient to explain what’s going on. It’s instructive here to look at blockchains. A large mining pool publicly showing how they have internally distributed their nodes and network dependencies doesn’t do much to calm community members scared of mining centralization. And pictures like these, showing 90% of Bitcoin hashpower at the time being capable of showing up to the same conference panel, do quite a bit to scare people:

But why is this image scary? From a “decentralization as fault tolerance” view, large miners being able to talk to each other causes no harm. But if we look at “decentralization” as being the presence of barriers to harmful collusion, then the picture becomes quite scary, because it shows that those barriers are not nearly as strong as we thought. Now, in reality, the barriers are still far from zero; the fact that those miners can easily perform technical coordination and likely are all in the same Wechat groups does not, in fact, mean that Bitcoin is “in practice little better than a centralized company”.

So what are the remaining barriers to collusion? Some major ones include:

  • Moral Barriers. In Liars and Outliers, Bruce Schneier reminds us that many “security systems” (locks on doors, warning signs reminding people of punishments…) also serve a moral function, reminding potential misbehavers that they are about to conduct a serious transgression and if they want to be a good person they should not do that. Decentralization arguably serves that function.
  • Internal negotiation failure. The individual companies may start demanding concessions in exchange for participating in the collusion, and this could lead to negotiation stalling outright (see “holdout problems” in economics).
  • Counter-coordination. The fact that a system is decentralized makes it easy for participants not participating in the collusion to make a fork that strips out the colluding attackers and continue the system from there. Barriers for users to join the fork are low, and the intention of decentralization creates moral pressure in favor of participating in the fork.
  • Risk of defection. It still is much harder for five companies to join together to do something widely considered to be bad than it is for them to join together for a non-controversial or benign purpose. The five companies do not know each other too well, so there is a risk that one of them will refuse to participate and blow the whistle quickly, and the participants have a hard time judging the risk. Individual employees within the companies may blow the whistle too.

Taken together, these barriers are substantial indeed – often substantial enough to stop potential attacks in their tracks, even when those five companies are simultaneously perfectly capable of quickly coordinating to do something legitimate. Ethereum blockchain miners, for example, are perfectly capable of coordinating increases to the gas limit, but that does not mean that they can so easily collude to attack the chain.

The blockchain experience shows how designing protocols as institutionally decentralized architectures, even when it’s well-known ahead of time that the bulk of the activity will be dominated by a few companies, can often be a very valuable thing. This idea is not limited to blockchains; it can be applied in other contexts as well (eg. see here for applications to antitrust).

Forking as Counter-Coordination

But we cannot always effectively prevent harmful collusions from taking place. And to handle those cases where a harmful collusion does take place, it would be nice to make systems that are more robust against them – more expensive for those colluding, and easier to recover for the system.

There are two core operating principles that we can use to achieve this end: (1) supporting counter-coordination and (2) skin-in-the-game. The idea behind counter-coordination is this: we know that we cannot design systems to be passively robust to collusions, in large part because there is an extremely large number of ways to organize a collusion and there is no passive mechanism that can detect them, but what we can do is actively respond to collusions and strike back.

In digital systems such as blockchains (this could also be applied to more mainstream systems, eg. DNS), a major and crucially important form of counter-coordination is forking.

If a system gets taken over by a harmful coalition, the dissidents can come together and create an alternative version of the system, which has (mostly) the same rules except that it removes the power of the attacking coalition to control the system. Forking is very easy in an open-source software context; the main challenge in creating a successful fork is usually gathering the legitimacy (game-theoretically viewed as a form of “common knowledge”) needed to get all those who disagree with the main coalition’s direction to follow along with you.

This is not just theory; it has been accomplished successfully, most notably in the Steem community’s rebellion against a hostile takeover attempt, leading to a new blockchain called Hive in which the original antagonists have no power.

Markets and Skin in the Game

Another class of collusion-resistance strategy is the idea of skin in the game. Skin in the game, in this context, basically means any mechanism that holds individual contributors in a decision individually accountable for their contributions. If a group makes a bad decision, those who approved the decision must suffer more than those who attempted to dissent. This avoids the “tragedy of the commons” inherent in voting systems.

Forking is a powerful form of counter-coordination precisely because it introduces skin in the game. In Hive, the community fork of Steem that threw off the hostile takeover attempt, the coins that were used to vote in favor of the hostile takeover were largely deleted in the new fork. The key individuals who participated in the attack individually suffered as a result.

Markets are in general very powerful tools precisely because they maximize skin in the game. Decision markets (prediction markets used to guide decisions; also called futarchy) are an attempt to extend this benefit of markets to organizational decision-making. That said, decision markets can only solve some problems; in particular, they cannot tell us what variables we should be optimizing for in the first place.

Structuring Coordination

This all leads us to an interesting view of what it is that people building social systems do. One of the goals of building an effective social system is, in large part, determining the structure of coordination: which groups of people and in what configurations can come together to further their group goals, and which groups cannot?

Different coordination structures, different outcomes

Sometimes, more coordination is good: it’s better when people can work together to collectively solve their problems. At other times, more coordination is dangerous: a subset of participants could coordinate to disenfranchise everyone else. And at still other times, more coordination is necessary for another reason: to enable the broader community to “strike back” against a collusion attacking the system.

In all three of those cases, there are different mechanisms that can be used to achieve these ends. Of course, it is very difficult to prevent communication outright, and it is very difficult to make coordination perfect. But there are many options in between that can nevertheless have powerful effects.

Here are a few possible coordination structuring techniques:

  • Technologies and norms that protect privacy
  • Technological means that make it difficult to prove how you behaved (secret ballots, MACI and similar tech)
  • Deliberate decentralization, distributing control of some mechanism to a wide group of people that are known to not be well-coordinated
  • Decentralization in physical space, separating out different functions (or different shares of the same function) to different locations (eg. see Samo Burja on connections between urban decentralization and political decentralization)
  • Decentralization between role-based constituencies, separating out different functions (or different shares of the same function) to different types of participants (eg. in a blockchain: “core developers”, “miners”, “coin holders”, “application developers”, “users”)
  • Schelling points, allowing large groups of people to quickly coordinate around a single path forward. Complex Schelling points could potentially even be implemented in code (eg. recovery from 51% attacks can benefit from this).
  • Speaking a common language (or alternatively, splitting control between multiple constituencies who speak different languages)
  • Using per-person voting instead of per-(coin/share) voting to greatly increase the number of people who would need to collude to affect a decision
  • Encouraging and relying on defectors to alert the public about upcoming collusions
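The per-person versus per-coin point in the list above can be quantified with a toy distribution. The balances below are entirely hypothetical:

```python
balances = [40, 25, 10, 5, 5, 5, 4, 3, 2, 1]   # hypothetical coin holdings

def min_colluders_coin_weighted(balances):
    """Fewest holders who together control a strict majority of coins."""
    need = sum(balances) / 2
    running, count = 0, 0
    for b in sorted(balances, reverse=True):
        running += b
        count += 1
        if running > need:
            return count
    return count

# Coin voting: the top two holders (40 + 25 of 100 coins) decide everything.
assert min_colluders_coin_weighted(balances) == 2
# Per-person voting: a collusion needs 6 of the 10 participants.
assert len(balances) // 2 + 1 == 6
```

The more concentrated the coin distribution, the wider this gap becomes, which is exactly why per-person voting raises the coordination cost of a takeover.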

None of these strategies are perfect, but they can be used in various contexts with differing levels of success. Additionally, these techniques can and should be combined with mechanism design that attempts to make harmful collusions less profitable and more risky to the extent possible; skin in the game is a very powerful tool in this regard. Which combination works best ultimately depends on your specific use case.

Eth 2.0 Dev Update #56 — “Road to Mainnet”

🆕 Road to Launch

🔹 Mainnet release public checklist

A public checklist for the eth2 phase 0 mainnet launch has finally been created and released to the public here. If you’re curious about the progress towards mainnet and when we might launch without resorting to speculating about dates, this project board is a great way to do so. To give a more granular perspective of our team’s focus before mainnet:

  • Second security audit
  • Implementing the eth2.0-apis standard in Prysm for client interoperability
  • Wrapping up voluntary exits in Prysm
  • A comprehensive web UI for Prysm!
  • Fuzz testing and resolving important bugs before we go to mainnet
  • Slasher improvements
  • Common slashing protection format for transporting keys between eth2 clients
  • Weak subjectivity sync

Out of these, only a few are features, which means we can likely perform a feature freeze by mid-October, allowing us to focus solely on security improvements and UX before going live. If all goes well, November is still looking good for a launch from our perspective.

🔹 Audit by Trail of Bits

We are pleased to announce the Prysm project is being audited by security firm Trail of Bits. Going into mainnet, having two full code audits of our eth2 client is critical for the safety of our stakers, and it helps us identify ways to improve our client with code best practices. Having two separate organizations, Quantstamp and Trail of Bits, review our code independently is also beneficial: Trail of Bits can start from the context of the previous audit and identify places where the code has changed since then, or where it would benefit from further review. In particular, this new audit focuses heavily on the slasher, slashing protection, and core specification attack vectors. For the sake of optimization we sometimes diverge from the spec in certain places, and this audit will help determine the safety of our approach.

📝 Merged Code, Pull Requests, and Issues

🔹 Significant security improvements against denial-of-service attacks

Ethereum Foundation researcher Protolambda has helped us a lot over the past two weeks with denial-of-service security analysis for Prysm. He identified multiple places where we failed to perform proper checks on inputs, leaving the node open to being overwhelmed by data from the outside world. We have since tightened up this code with many merged bug fixes that help prevent catastrophic scenarios on mainnet, and we look forward to the upcoming audit to further analyze potential issues.
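As a hedged illustration of the kind of input validation involved (the limits, message shapes, and function names below are hypothetical, not Prysm's actual code), bounding untrusted inputs before processing them looks roughly like this:

```python
# Illustrative sketch: cap untrusted network inputs before doing any
# expensive work on them, so a malicious peer cannot overwhelm the node.
# The limits here are made up for the example, not Prysm's real values.

MAX_REQUEST_BLOCKS = 1024   # cap on blocks a peer may request at once
MAX_BLOCK_SIZE = 1 << 20    # 1 MiB cap on a single serialized block

def validate_blocks_by_range_request(start_slot: int, count: int) -> None:
    """Reject range requests that could overwhelm the node."""
    if count <= 0:
        raise ValueError("count must be positive")
    if count > MAX_REQUEST_BLOCKS:
        raise ValueError(f"requested {count} blocks, limit is {MAX_REQUEST_BLOCKS}")
    if start_slot < 0:
        raise ValueError("start_slot must be non-negative")

def validate_incoming_block(raw: bytes) -> None:
    """Reject oversized payloads before attempting to decode them."""
    if len(raw) > MAX_BLOCK_SIZE:
        raise ValueError("block exceeds maximum allowed size")
```

The key pattern is that the cheap size and count checks run before any decoding or database access, so the cost of rejecting a hostile message stays constant.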

🔹 Removal of hosted eth1 node support, to more closely simulate mainnet conditions

As many node operators are aware, Prysmatic Labs has long been offering hosted eth1 nodes to people running Prysm beacon nodes. While participating in the testnets over the previous months, stakers didn’t need to run their own eth1 node because their beacon nodes would connect to our hosted nodes by default. However, as mainnet gets closer, we will not be hosting eth1 nodes for the public to use. The expectation is that you must either run your own eth1 node or use a third-party provider such as Infura or Alchemy. Running an eth1 node is important if you are running validators, because validators include the latest eth1 block root and other information in their blocks for use in a voting process within the beacon chain. To add your own eth1 node, you can follow the instructions in our documentation portal here. As of the last few weeks, Prysm beacon nodes no longer connect to our hosted eth1 nodes by default.

🔹 Voluntary exits implemented, with the ability to submit an exit from the Prysm CLI

Our teammate Radek took ownership of the voluntary exits feature in Prysm. We have promised our users for a while that we would allow simple voluntary exits from the command line with Prysm, and we decided to prioritize that as we get closer to mainnet launch. Radek implemented a command, `validator accounts-v2 exit`, which guides stakers through an interactive process in which they can submit an exit to their beacon node. Since exits are irreversible, a lot of steps are in place to ensure users know what they are doing before they complete the process. You can see the implementation here.

🔜 Upcoming Work

🔹 Advanced peer scoring added to our p2p routing

Enabling peer scoring and evaluation in Prysm nodes is an ongoing effort which eventually will result in beacon nodes favoring well-behaved peers, while restricting less useful ones.

The problem is being tackled from two sides: scoring peers’ behavior against application-level invariants, then descending to lower-level network invariants. Application-level scoring allows us to restrict peers based on higher-level scenarios (e.g. restricting peers that consistently return fewer blocks than average during two consecutive epochs). Network-level invariants help nodes build a healthy network mesh based on the network performance of surrounding peers.
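To make the application-level idea concrete, here is a hedged sketch of such a scorer (the class name, window size, and threshold are hypothetical, not Prysm's actual implementation): a peer whose returned/requested block ratio stays below a threshold for two consecutive epochs is flagged as a candidate for restriction.

```python
from collections import defaultdict, deque

class BlockProviderScorer:
    """Illustrative application-level peer scorer: flags peers that
    consistently under-deliver requested blocks across a window of epochs."""

    def __init__(self, window: int = 2, min_ratio: float = 0.75):
        self.window = window        # consecutive epochs considered
        self.min_ratio = min_ratio  # acceptable returned/requested ratio
        # Per-peer rolling history of delivery ratios, capped at `window`.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record_epoch(self, peer_id: str, requested: int, returned: int) -> None:
        ratio = returned / requested if requested else 1.0
        self.history[peer_id].append(ratio)

    def is_bad_peer(self, peer_id: str) -> bool:
        ratios = self.history[peer_id]
        # Only flag once we have a full window of consistently poor epochs.
        return len(ratios) == self.window and all(r < self.min_ratio for r in ratios)
```

A single good epoch resets the judgment, which keeps the scorer from permanently penalizing a peer for a transient hiccup.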

The application-level peer scorer was added in #6579 and #6709. It is still highly experimental and will therefore sit behind the `--dev` flag for quite some time (once we sort out network mesh scoring, extending it to the application level is just a matter of injecting our scoring function into the GossipSub parameter list).

Enabling network-level scoring was blocked by an issue in the upstream protocol, which has recently been resolved: GossipSub support for dynamically setting topic scoring parameters was merged a couple of days ago (see this PR in go-libp2p-pubsub). With this update to GossipSub, we have been able to make progress on introducing network-level peer scoring into our p2p routing. If you wish to follow the development, refer to issue #6043 and its corresponding PR #6943. That PR is still a work in progress, but we hope to merge it into master in the coming weeks.

🔹 Weak subjectivity sync

One of the beautiful aspects of eth2 is the concept of “chain finality”: given the consensus voting rules, there exist checkpoints past which the chain cannot be reverted under the rules of the protocol. Proof-of-work chains can always be reverted if an attacker has enough mining power to force the majority to switch to their chain. In proof of stake, however, the fork choice and consensus rules together define explicit finality, in which the protocol itself makes it impossible to revert past a finalized checkpoint.

An obvious example is the genesis block, which by definition is irreversible and is agreed upon by all participants as the starting point of chain sync. However, given enough time and finality, we can pick another checkpoint, not the genesis block, from which it is safe for nodes to sync while still having significant validation of the blockchain. This sort of sync is known as “weak subjectivity sync” and has been the subject of much research by the Ethereum Foundation over the past years. Before we go to mainnet, implementing weak subjectivity sync is important to mitigate certain attacks on the chain and to avoid having to hard fork later to add such a feature. The official write-up on how weak subjectivity will work in eth2 is located here, and our teammate Terence has already started incorporating this into Prysm.
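The core check a node performs is simple to state: any chain it follows must contain the configured checkpoint. A minimal sketch, with simplified types and hypothetical function names (real implementations track block roots by epoch boundary and handle far more state):

```python
# Hedged illustration of the weak subjectivity check: a node refuses to
# follow any chain that does not include the pinned checkpoint root at
# the pinned epoch. `canonical_roots` maps epoch -> block root.

def chain_contains_checkpoint(canonical_roots: dict,
                              ws_epoch: int, ws_root: str) -> bool:
    return canonical_roots.get(ws_epoch) == ws_root

def on_new_chain(canonical_roots: dict, ws_epoch: int, ws_root: str) -> None:
    """Reject any candidate chain conflicting with the checkpoint."""
    if not chain_contains_checkpoint(canonical_roots, ws_epoch, ws_root):
        raise RuntimeError("chain conflicts with weak subjectivity checkpoint")
```

This mirrors the behavior described in the weak subjectivity write-up: a conflicting chain is rejected outright rather than weighed by fork choice.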

🔹 Common slashing protection format for client interoperability

Exporting keystores from Prysm and importing them into Lighthouse, or vice versa, is not enough to protect users during a catastrophe. During the Medalla testnet incident, we saw several validators get slashed when they transitioned from Prysm to another client. This happened because Prysm implements a slashing protection feature that is not compatible with other clients, and the slashing protection history does not get exported when a user moves their keystores to another client. As we get closer to mainnet, making this protection a common format between clients is critical, as is having clearly documented migration paths between clients that keep users safe. Michael Sproul from Sigma Prime (Lighthouse) started an initiative for slashing protection compatibility, and our teammate Shay has been working with him and other eth2 implementers to ensure we include this feature in Prysm.
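Independent of the on-disk interchange format being standardized, the core rule such a database enforces can be sketched as follows (a simplified, hypothetical illustration; real slashing protection also tracks attestation source and target epochs, not just block slots):

```python
# Hedged sketch of the essential slashing-protection invariant for block
# proposals: never sign a second block at or below the highest slot
# already signed for a given validator key.

class SlashingProtection:
    def __init__(self):
        # Maps validator pubkey -> highest slot it has signed a block for.
        self.highest_signed_slot = {}

    def check_and_record_block(self, pubkey: str, slot: int) -> None:
        prev = self.highest_signed_slot.get(pubkey, -1)
        if slot <= prev:
            raise RuntimeError(
                f"refusing to sign slot {slot}: already signed up to {prev}")
        self.highest_signed_slot[pubkey] = slot
```

The point of a common export format is precisely to carry state like `highest_signed_slot` between clients, so the invariant keeps holding after a migration.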

🔹 Implementing the eth2.0-apis standard in Prysm

Our teammate Ivan has been working on implementing eth2.0-apis, which has been decided as the REST API standard for eth2 clients. Over the past year, we have maintained our own API under the ethereumapis repository, which has powered Prysm through multiple testnets and served several block explorers.

However, it is critical that all teams align on a standard as much as possible before a mainnet launch. In eth1, the two major clients, geth and parity, had significant mismatches in API endpoints, which made interoperability difficult and painful for many node operators, block explorers, and companies. Post-mainnet, teams will likely be too busy with maintenance and improvements to undertake such a radical overhaul of their APIs, which is why we aim to finish our compatibility with eth2.0-apis before mainnet launch. Ivan has been working on defining all of the protobuf definitions necessary for the API standard, and over the coming two weeks we will start implementing them in Prysm. It is important to note we will still support ethereumapis for those who wish to use it.


🔹 Awesome project built for Prysm: Typescript Remote Signer Server!

We want to highlight an awesome project built for Prysm by one of our stakers this past week. Sven from our Discord server recently published a remote signer implementation compatible with Prysm, written in TypeScript! Remote signers are the most secure kind of wallet setup for anyone participating in eth2, as they completely separate the validating keys from the beacon node over a network connection. You can connect Sven’s remote signer to your Prysm validator client to sign data and block proposals remotely. For reference, we have a dedicated page on remote signers in our docs portal here. It covers how a remote signer works, what it takes to build one, and how to use it as your wallet in Prysm. Check out Sven’s project :).

Interested in Contributing?

We are always looking for devs interested in helping us out. If you know Go or Solidity and want to contribute to the forefront of research on Ethereum, please drop us a line and we’d be more than happy to help onboard you :).

Check out our contributing guidelines and our open projects on Github. Each task and issue is grouped into the Phase 0 milestone along with a specific project it belongs to.

As always, follow us on Twitter or join our Discord server and let us know what you want to help with.

Eth 2.0 Dev Update #56— “Road to Mainnet” was originally published in Prysmatic Labs on Medium, where people are continuing the conversation by highlighting and responding to this story.

Ethereum No-Loss Lottery System: DeFi democratizes savings prizes and makes them more transparent

Times change fast in the crypto and blockchain space. While in recent years many people (myself included) were talking about the added…

Solidity 0.7.1 Release Announcement

Solidity v0.7.1 adds functions at file-level and fixes several small bugs. Notable New Features Functions At File-Level Functions can now be defined at file-level. Such functions are called “free functions” (as opposed to functions bound to a specific contract). Free functions are always internal functions and are meant to replace…

The Stateless Tech Tree: reGenesis Edition

This week we’re revising the Tech Tree to reflect some new major milestones to Ethereum 1.x R&D that are not quite a complete realization of Stateless Ethereum, but much more reasonably attainable in the mid-term. The most significant addition to the tech tree is Alexey’s reGenesis proposal. This is far…

Validated, staking on eth2: #5 – Why client diversity matters

Disclaimer: None of this is meant as a slight against any client in particular. There is a high likelihood that each client and possibly even the specification has its own oversights and bugs. Eth2 is a complicated protocol, and the people implementing it are only human. The point of this…

Trust Models

One of the most valuable properties of many blockchain applications is trustlessness: the ability of the application to continue operating in an expected way without needing to rely on a specific actor to behave in a specific way even when their interests might change and push them to act in some different unexpected way in the future. Blockchain applications are never fully trustless, but some applications are much closer to being trustless than others. If we want to make practical moves toward trust minimization, we want to have the ability to compare different degrees of trust.

First, my simple one-sentence definition of trust: trust is the use of any assumptions about the behavior of other people. If before the pandemic you would walk down the street without making sure to keep two meters’ distance from strangers so that they could not suddenly take out a knife and stab you, that’s a kind of trust: both trust that people are very rarely completely deranged, and trust that the people managing the legal system continue to provide strong incentives against that kind of behavior. When you run a piece of code written by someone else, you trust that they wrote the code honestly (whether due to their own sense of decency or due to an economic interest in maintaining their reputations), or at least that there exist enough people checking the code that a bug would be found. Not growing your own food is another kind of trust: trust that enough people will realize that it’s in their interests to grow food so they can sell it to you. You can trust different sizes of groups of people, and there are different kinds of trust.

For the purposes of analyzing blockchain protocols, I tend to break down trust into four dimensions:

  • How many people do you need to behave as you expect?
  • Out of how many?
  • What kinds of motivations are needed for those people to behave? Do they need to be altruistic, or just profit seeking? Do they need to be uncoordinated?
  • How badly will the system fail if the assumptions are violated?

For now, let us focus on the first two. We can draw a graph:

The more green, the better. Let us explore the categories in more detail:

  • 1 of 1: there is exactly one actor, and the system works if (and only if) that one actor does what you expect them to. This is the traditional “centralized” model, and it is what we are trying to do better than.
  • N of N: the “dystopian” world. You rely on a whole bunch of actors, all of whom need to act as expected for everything to work, with no backups if any of them fail.
  • N/2 of N: this is how blockchains work – they work if the majority of the miners (or PoS validators) are honest. Notice that N/2 of N becomes significantly more valuable the larger the N gets; a blockchain with a few miners/validators dominating the network is much less interesting than a blockchain with its miners/validators widely distributed. That said, we want to improve on even this level of security, hence the concern around surviving 51% attacks.
  • 1 of N: there are many actors, and the system works as long as at least one of them does what you expect them to. Any system based on fraud proofs falls into this category, as do trusted setups, though in that case the N is often smaller. Note that you do want the N to be as large as possible!
  • Few of N: there are many actors, and the system works as long as at least some small fixed number of them do what you expect them to. Data availability checks fall into this category.
  • 0 of N: the system works as expected without any dependence whatsoever on external actors. Validating a block by checking it yourself falls into this category.
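The buckets above can all be read as thresholds "k of N": the system works as long as at least k of the N actors behave as expected. A small, purely pedagogical sketch (the function and labels are my own, not from the article):

```python
# Each trust model maps to a threshold k: the minimum number of
# well-behaved actors (out of n) needed for the system to work.

def system_works(honest: int, n: int, model: str) -> bool:
    thresholds = {
        "1 of 1":   (1, 1),
        "N of N":   (n, n),
        "N/2 of N": (n // 2 + 1, n),  # strict majority
        "1 of N":   (1, n),
        "0 of N":   (0, n),
    }
    k, total = thresholds[model]
    assert n == total, "model assumes a different population size"
    return honest >= k
```

Reading the graph this way makes the orderings explicit: for the same N, "0 of N" tolerates everything, "1 of N" tolerates all but one failure, and "N of N" tolerates none.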

While all buckets other than “0 of N” can be considered “trust”, they are very different from each other! Trusting that one particular person (or organization) will work as expected is very different from trusting that some single person anywhere will do what you expect them to. “1 of N” is arguably much closer to “0 of N” than it is to “N/2 of N” or “1 of 1”. A 1-of-N model might perhaps feel like a 1-of-1 model because it feels like you’re going through a single actor, but the reality of the two is very different: in a 1-of-N system, if the actor you’re working with at the moment disappears or turns evil, you can just switch to another one, whereas in a 1-of-1 system you’re screwed.

Particularly, note that even the correctness of the software you’re running typically depends on a “few of N” trust model to ensure that if there’s bugs in the code someone will catch them. With that fact in mind, trying really hard to go from 1 of N to 0 of N on some other aspect of an application is often like making a reinforced steel door for your house when the windows are open.

Another important distinction is: how does the system fail if your trust assumption is violated? In blockchains, the two most common types of failure are liveness failures and safety failures. A liveness failure is an event in which you are temporarily unable to do something you want to do (eg. withdraw coins, get a transaction included in a block, read information from the blockchain). A safety failure is an event in which something actively happens that the system was meant to prevent (eg. an invalid block gets included in a blockchain).

Here are a few examples of trust models of a few blockchain layer 2 protocols. I use “small N” to refer to the set of participants of the layer 2 system itself, and “big N” to refer to the participants of the blockchain; the assumption is always that the layer 2 protocol has a smaller community than the blockchain itself. I also limit my use of the word “liveness failure” to cases where coins are stuck for a significant amount of time; no longer being able to use the system but being able to near-instantly withdraw does not count as a liveness failure.

  • Channels (incl state channels, lightning network): 1 of 1 trust for liveness (your counterparty can temporarily freeze your funds, though the harms of this can be mitigated if you split coins between multiple counterparties), N/2 of big-N trust for safety (a blockchain 51% attack can steal your coins)
  • Plasma (assuming centralized operator): 1 of 1 trust for liveness (the operator can temporarily freeze your funds), N/2 of big-N trust for safety (blockchain 51% attack)
  • Plasma (assuming semi-decentralized operator, eg. DPOS): N/2 of small-N trust for liveness, N/2 of big-N trust for safety
  • Optimistic rollup: 1 of 1 or N/2 of small-N trust for liveness (depends on operator type), N/2 of big-N trust for safety
  • ZK rollup: 1 of small-N trust for liveness (if the operator fails to include your transaction, you can withdraw, and if the operator fails to include your withdrawal immediately they cannot produce more batches and you can self-withdraw with the help of any full node of the rollup system); no safety failure risks
  • ZK rollup (with light-withdrawal enhancement): no liveness failure risks, no safety failure risks
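Restating the list above as data makes the comparison easy to query (labels follow the article's notation; the dictionary and helper below are illustrative only):

```python
# Liveness and safety trust assumptions of the layer-2 designs listed
# above. "small-N" is the layer-2 participant set, "big-N" the base chain.

L2_TRUST_MODELS = {
    "channels":                     {"liveness": "1 of 1",
                                     "safety": "N/2 of big-N"},
    "plasma (centralized)":         {"liveness": "1 of 1",
                                     "safety": "N/2 of big-N"},
    "plasma (DPOS)":                {"liveness": "N/2 of small-N",
                                     "safety": "N/2 of big-N"},
    "optimistic rollup":            {"liveness": "1 of 1 or N/2 of small-N",
                                     "safety": "N/2 of big-N"},
    "zk rollup":                    {"liveness": "1 of small-N",
                                     "safety": "none"},
    "zk rollup (light-withdrawal)": {"liveness": "none",
                                     "safety": "none"},
}

def safety_depends_on_chain_majority(protocol: str) -> bool:
    """True if a 51% attack on the base chain can cause a safety failure."""
    return L2_TRUST_MODELS[protocol]["safety"] == "N/2 of big-N"
```

One pattern jumps out immediately: only the ZK rollup variants escape the "N/2 of big-N" safety assumption, because validity proofs make an invalid state transition impossible rather than merely contestable.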

Finally, there is the question of incentives: does the actor you’re trusting need to be very altruistic to act as expected, only slightly altruistic, or is being rational enough? Searching for fraud proofs is “by default” slightly altruistic, though just how altruistic it is depends on the complexity of the computation (see the verifier’s dilemma), and there are ways to modify the game to make it rational.

Assisting others with withdrawing from a ZK rollup is rational if we add a way to micro-pay for the service, so there is really little cause for concern that you won’t be able to exit from a rollup with any significant use. Meanwhile, the greater risks of the other systems can be alleviated if we agree as a community to not accept 51% attack chains that revert too far in history or censor blocks for too long.

Conclusion: when someone says that a system “depends on trust”, ask them in more detail what they mean! Do they mean 1 of 1, or 1 of N, or N/2 of N? Are they demanding these participants be altruistic or just rational? If altruistic, is it a tiny expense or a huge expense? And what if the assumption is violated – do you just need to wait a few hours or days, or do you have assets that are stuck forever? Depending on the answers, your own answer to whether or not you want to use that system might be very different.

Samson Mow & Vitalik Buterin / Ethereum #Supplygate / $6 billion locked in DeFi …

One million Korean got blockchain driving licenses, Steem vs Tron, Bitcoin broke $12K…

Aug 19 · 3 min read

Learn about Bitcoin dollar-cost averaging through RoundlyX in 30 seconds.

And don’t forget to use the “CoinCodeCap” promo code to get $4 in BTC.

Latest News 📰

Podcasts 💽

Good Reads 📑


The picture says it all 📷

Get published on Coinmonks

If you would like to write educational articles on the crypto/blockchain space and get published in the Coinmonks publication, just mail me or DM me on Twitter.

If you like reading Coinmonks, you can donate to us too.

Get Best Software Deals Directly In Your Inbox


ESP: Beyond Grants

Since transitioning into the Ecosystem Support Program from EF Grants, we’ve talked about defining “support” more comprehensively, thinking beyond simple grant funding. But what does a more comprehensive definition of support actually mean? In practice, it means something different for every project, and it starts with a conversation. ESP was…