What's Really Driving the Cryptocurrency Phenomenon?




In this paper, we introduce investors to a decades-old subculture of eccentric software-makers who resist the oppressive and ethically-fraught traditions of corporate employment. We recount how they set out in the 1980s to make commercial software irrelevant, and how their mission expanded into a war against all forms of institutional oversight. We examine their approach to organizing volunteer software production in service of this war, and how their methods produced successful software. We present Bitcoin as the next logical innovation in volunteer-based software development: an ad hoc human coordination machine, which uses unpaid, unplanned contributions in lieu of a salaried workforce. We will examine how a volunteer-based system can resolve moral hazards endemic to software infrastructure development in a commercial setting, if the participants in the system adhere to a strict set of rules. We look at how a distributed network of machines is used to enforce and maintain rules set up for human participants, even if those participants hold key roles in developing the system software. Finally, we consider the cost savings achieved in a system built with volunteer labor, and how the economics of these “permissionless blockchains” might undermine the value proposition of full-time software employment. We relate this outcome to the original goal of the software makers to make institutional software uncompetitive, and examine who will be caught in the crossfire. As a coda, we ask: what is the larger socio-economic impact of systems like Bitcoin, and who benefits?


Timeline to Bitcoin

  • 1904: The Veblenian Dichotomy distinguishes between institutions and their technologies.
  • 1918: Taylorism, the original “management science,” is first documented by H.B. Drury.
  • 1934: “Fordism” management style gains prominence, efficient and oppressive.
  • 1937: Ronald Coase publishes “Theory of the Firm,” the economic rationale for why firms grow.
  • 1956: A consent decree settles the government’s antitrust suit against AT&T; the company is barred from entering the computer business.
  • 1956: Hacker movement emerges at MIT and Stanford.
  • 1964: The National Society of Professional Engineers publishes code of ethics.
  • 1968: The poem “All Watched Over by Machines of Loving Grace” emblematic of tech-utopianism.
  • 1969: The Union of Concerned Scientists is formed at MIT.
  • 1971: Prof. John Kenneth Galbraith coins the term “the Technostructure” for business bureaucracy.
  • 1974: DARPA develops the Internet protocol suite.
  • 1981: Writer William Gibson coins the term “cyberspace” to mean a digital dystopia where corporations rule.
  • 1982: AT&T settles the Department of Justice antitrust suit and agrees to be broken up.
  • 1983: Richard Stallman announces the GNU Project, an effort to build a free operating system.
  • 1983: Computer Professionals for Social Responsibility creates code of ethics for cryptographers.
  • 1983: David Chaum creates centralized digital cash system.
  • 1984: IBM and AT&T begin using Internet protocol suite.
  • 1984: William Gibson publishes “Neuromancer,” popularizing the idea of “The Matrix.”
  • 1985: Richard Stallman founds Free Software Foundation in protest of commercial software practices.
  • 1985: GM experimented with shared ownership of one of its car companies, Saturn.
  • 1989: The World Wide Web is proposed, built on the Hypertext Transfer Protocol, or HTTP.
  • 1990: Electronic Frontier Foundation (EFF) is formed.
  • 1990: Linked timestamping proposed by Haber and Stornetta.
  • 1991: Ronald Coase wins Nobel Prize in Economics for his work in 1937 and 1960.
  • 1991: The term “New Jersey style” is popularized by “The Rise of ‘Worse is Better.’”
  • 1992: Intel Chief Scientist Tim May publishes the Crypto-Anarchist Manifesto.
  • 1992: Cypherpunks Mailing List starts, attracting people like Julian Assange and Satoshi Nakamoto.
  • 1993: Cypherpunks Manifesto published.
  • 1995: Richard Barbrook publishes “The Californian Ideology.”
  • 1996: “Declaration of Independence of Cyberspace” published by John Perry Barlow.
  • 1996: The open source movement emerges as a marketing campaign for free software use in business.
  • 1997: Eric Raymond presents “Cathedral versus Bazaar,” an ode to open source development.
  • 1997: Adam Back invents Hashcash, a proof-of-work scheme for deterring spam and denial-of-service attacks.
  • 1998: Wei Dai publishes B-money proposal.
  • 1999: Freenet launches, a censor-resistant document store and networking suite.
  • 2000: Microsoft Windows Chief Jim Allchin calls open source “an intellectual property destroyer.”
  • 2001: Steve Ballmer calls Linux “a cancer.”
  • 2001: Mac OS X launches, built on Darwin, a free and open source Unix variant.
  • 2001: Agile Development methodology launches, bringing hacker operational patterns to business.
  • 2005: Nick Szabo suggests a “distributed title registry” or ledger as a common resource.
  • 2008: Satoshi Nakamoto publishes the Bitcoin whitepaper; the network launches in January 2009.
  • 2012: Microsoft integrates Linux into its enterprise Azure platform.
  • 2014: Bitcoin price rises. William Shatner astutely notices that Bitcoin has become money for “cyber snobs.”
  • 2016: CME launches Bitcoin price index.
  • 2017: Bitcoin futures begin trading on CME and CBoE.
  • 2018: Morgan Stanley and Goldman Sachs announce they will trade Bitcoin.
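Two of the entries above, linked timestamping (1990) and Hashcash (1997), are direct technical ancestors of Bitcoin's mining mechanism. As a minimal sketch (function names are our own, not Adam Back's original API), Hashcash-style proof of work asks a sender to burn CPU time finding a nonce whose hash has a required number of leading zero bits, while verification costs the recipient a single hash:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # zeros within the first nonzero byte
            break
    return bits

def mint_stamp(resource: str, difficulty: int) -> int:
    """Search for a nonce so SHA-256(resource:nonce) has `difficulty` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def check_stamp(resource: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single cheap hash, unlike the costly search above."""
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = mint_stamp("alice@example.com", difficulty=16)
print(check_stamp("alice@example.com", nonce, 16))  # True
```

Each extra bit of difficulty doubles the expected search cost, which is the same asymmetry Bitcoin later uses to make rewriting history expensive.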

Section I

What’s Wrong With The Cryptocurrency Boom?

On the challenges of evaluating cryptocurrency for investors and portfolio managers.

“To me, it’s just dementia. It’s like somebody else is trading turds and you decide you can’t be left out.”

— Charlie Munger on cryptocurrency, May 5, 2018[1]

Cryptocurrencies have made headlines, despite some obvious contradictions. These contradictions include:

  • No clear utility, despite the enthusiasm. There is over $200 billion of USD value held in cryptocurrency, spread across an estimated 2.9 to 5.8 million Internet users worldwide.[2] It is hard to identify a clear use for them, yet enthusiasts boast about their long-term value.
  • Hated by exactly half of Wall Street. Bitcoin is condemned with vigor by traditional investors like Warren Buffett, who said “[Bitcoin] is rat poison, squared,” and JPMorgan Chase CEO Jamie Dimon, who called it “a fraud.” Yet it has been embraced by high-tech heavyweights like Jack Dorsey, Peter Thiel, and ICE; banks including Goldman Sachs and Morgan Stanley have announced cryptocurrency desks.
  • Dominated by a single IPO. The only notable public offering to come from the cryptocurrency industry has been Bitmain, a three-year-old company that makes Bitcoin mining hardware. Exchanges like Binance have sprung up in the same timespan and have already reached profit parity with NASDAQ as of Q1 2018.[3]
  • Copied by the world’s brightest entrepreneurs. Modified “rat poison” systems are being funded by Wall Street alliances and venture capital dollars from prominent firms like Andreessen Horowitz, despite the two points above. $6.3B was raised in token offerings in Q1 2018 alone.[4] Facebook and Google both have blockchain divisions.[5] [6]
  • Fraud aplenty, but no killer apps. Mainstream computer scientists say Bitcoin is a step forward in their field, bringing together 30 years of prior work on anti-spam and timestamping systems.[7] [8] There remains no “killer app” in sight, but the SEC has subpoenaed no fewer than 17 cryptocurrency sellers, issuers, and exchanges since 2013 for using the technology to defraud investors.[9]
  • Massive popularity in troubled emerging economies. Bitcoin has hit all-time-highs in price and trading volume in struggling economies in South America such as Venezuela, Colombia, and Peru.[10] [11] [12]

How should investors make sense of these conflicting narratives?

Obstacles to understanding cryptocurrency

IT is a $3.7 trillion industry worldwide.[13] As we will show, commercial software companies compete directly with free-to-license software systems such as Bitcoin, and have a strong incentive to reframe their utility in order to make their proprietary systems appear better.

Bitcoin, and many copycat cryptocurrencies, combine a series of previous innovations in cryptography and computer science to form fully-featured digital currency systems, which have different properties from the currency systems in wide use today.[14] Transaction records are held in “triple entry,” by both participants and the network itself; changing the network’s record would take an enormous amount of computing power and capital.
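The tamper-evidence claim can be illustrated with a toy chained ledger (a sketch of the principle, not Bitcoin's actual data structures): each record commits to the hash of the one before it, so altering any early entry invalidates every subsequent hash and is immediately detectable.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # placeholder predecessor for the first block
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"from": "alice", "to": "bob", "amount": 5},
                     {"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(chain))          # True
chain[0]["record"]["amount"] = 500  # tamper with an early entry
print(verify_chain(chain))          # False
```

In Bitcoin, each link additionally carries proof of work, so a forger must not only recompute the hashes but also outspend the honest network in electricity and hardware.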

Bitcoin’s “immutable” append-only data structure (colloquially called the “blockchain” or “distributed ledger”) has been kidnapped into the pantheon of enterprise technology fads along with jargon like “cloud,” “mobile,” and “social,” with enterprise software marketing downplaying its original use-case in currency systems, promulgating instead its virtues in niche, segmented commercial use-cases.

Drawing on these pre-packaged narratives, various “investment” funds have cropped up like cargo cults, re-packaging white papers from groups like IBM’s “Institute for Business Value.” The IBM paper argues that “enterprises, once constrained by complexity,” can use blockchain to “scale with impunity.”[15] It sees blockchains as useful for transactions between institutions, promising “the tightening of trust” and “super efficiency.”[16] Many of these investment advisors seek to launch individual “tokens” or “crypto-assets” for privately-operated networks, designed for niche enterprise “needs.”

We will show that cryptocurrency is the result of a retaliatory movement against the “impunity” of large “trusted” institutions. Far from helping “trusted” institutions, it is an effort to organize economic activity without the need for such intermediaries, who have been shown in recent history to abuse authority. Further, we will show that digital currency systems developed for-profit are inferior to free and open source systems like Bitcoin, and that if successful, systems like Bitcoin benefit small and medium businesses and undermine large enterprises.

Uncomfortable questions about Bitcoin’s creator

The creator of Bitcoin, Satoshi Nakamoto, was solving a very particular problem in designing a blockchain-based currency: how to build a currency system that wasn’t owned by any person or organization, and that required no central operator, not even a so-called “trustworthy” company like IBM.

On November 7, 2008 he wrote to a cryptography mailing list that with Bitcoin, “…we can win a major battle in the arms race and gain a new territory of freedom for several years. Governments are good at cutting off the heads of a centrally controlled network like Napster, but pure P2P [peer-to-peer] networks like Gnutella and Tor seem to be holding their own.” [17] [18]



Figure 0: Distributed (left) and centralized (right) network architectures.
(Credit: Wikimedia)

Who is “we,” and why is there an arms race over cryptographic network technologies? Nakamoto expects the reader to know the context. On June 18, 2010, Nakamoto tells the Bitcointalk forum that he has been working on Bitcoin since 2007, and that the peer-to-peer aspect was his biggest breakthrough: “at some point I became convinced there was a way to do this without any trust required at all,” he says, “and [I] couldn’t resist to keep thinking about it.”[19]

In earlier digital currency experiments, counterfeiting was a common problem, but so was reliability. Participants in the system had to trust that the central issuer of the digital currency was not inflating the supply, and that its systems wouldn’t fail, losing transaction data.[20] Nakamoto believed that Bitcoin would be most useful as a peer-to-peer network wherein the participants in the network could operate ad hoc, without knowing one another’s real names or locations, and “without any trust” between them. This, he believed, would create a network where participants could operate privately, and could not be shut down by regulating or bankrupting a central operating group.

The system Nakamoto built was more than a proof of concept. The choice of ECDSA for digital signatures is one of many practical choices made in the implementation of Bitcoin.[21] In the same post on June 18, 2010, about a year and a half after the network’s launch, Nakamoto said: “Much more of the work was designing than coding. Fortunately, so far all the issues raised have been things I previously considered and planned for.”[22]
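The ECDSA choice is worth a brief illustration. The sketch below is emphatically not Bitcoin's implementation; it is a minimal, insecure, pure-Python rendering of ECDSA over Bitcoin's curve, secp256k1 (requires Python 3.8+ for modular inverse via pow). It shows the property that matters for a currency: a signature proves control of a private key, and thus of coins, without ever revealing the key.

```python
import hashlib, random

# secp256k1 domain parameters (the curve Bitcoin uses)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Add two points on the curve (None is the point at infinity)."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        m = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        m = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (m * m - a[0] - b[0]) % P
    return (x, (m * (a[0] - x) - a[1]) % P)

def scalar_mult(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1: result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    while True:
        k = random.randrange(1, N)          # per-signature secret nonce
        r = scalar_mult(k, G)[0] % N
        if r == 0: continue
        s = pow(k, -1, N) * (z + r * priv) % N
        if s: return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    w = pow(s, -1, N)
    pt = point_add(scalar_mult(z * w % N, G), scalar_mult(r * w % N, pub))
    return pt is not None and pt[0] % N == r

priv = random.randrange(1, N)               # private key: a random integer
pub = scalar_mult(priv, G)                  # public key: a curve point
sig = sign(priv, b"pay 1 BTC to Alice")
print(verify(pub, b"pay 1 BTC to Alice", sig))    # True
print(verify(pub, b"pay 9 BTC to Mallory", sig))  # False
```

Any tampering with the signed message invalidates the signature, which is how Bitcoin nodes can accept transactions from anonymous strangers "without any trust."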

Nakamoto believed that Bitcoin was destined for either mass success or abject failure. In a post on February 14, 2010 to the Bitcointalk forums, the creator of Bitcoin wrote: “I’m sure that in 20 years there will either be very large [Bitcoin] transaction volume or no volume.”[23]

Nearly a decade into Bitcoin’s operation, it now transacts $1.3 trillion of value per annum, more dollar volume than PayPal.[24] This is a significant feat by the standards of Bitcoin’s creator, and by the creators of its predecessors, and yet portfolio managers have not developed strong explanations for its meaning and impact.

What’s wrong with current investment narratives

Bitcoin was one of many experiments in independent digital currency systems, but the first which has produced a valuable, widely-traded asset. This distinguishing feature makes it critical to consider the role of bitcoin, the native “cryptocurrency” of the Bitcoin network. (Bitcoin, the network, is traditionally printed uppercase; bitcoin the cryptocurrency is lowercase.)

Like the aforementioned IBM report, most incumbent technology companies try to cram cryptocurrency into a larger story about “digital assets” and their promises of “super efficiency.” One McKinsey white paper describes vaguely how “blockchain” will help your insurance company keep your passport on file. [25] These incoherent stories typically place cryptocurrency into one of several pre-existing sectors:

  • Enterprise software. In which blockchain technology is analyzed through a venture capital lens, despite the fact that the most widely-used cryptocurrency protocols are classified as “foundational,” not “disruptive,” technologies, and are free software.[26]
  • Capital markets. There is a movement to “tokenize everything” from debt to title deeds. However, these assets are already highly digitized, so this amounts to suboptimization.[27]
  • App economy. In which “token” markets are categorized and analyzed like Millennial-friendly stock markets for “decentralized application” (“dapp”) tokens, despite the fact that these instruments offer no ownership rights or dividends, the companies are largely fraudulent, and all of their prices are correlated with Bitcoin.

These three misleading narratives create problems for investors, who can see the asset class growing, yet cannot find a sensible explanation. Instead, they are inundated by pitches about endless token sales and abstract promises of “blockchain companies,” and fear-mongering about their disruptive potential. Any temptation to invest in these schemes should be tempered by three obvious facts:

  1. Over half the asset class is one product, Bitcoin, a currency system which is still not widely understood by institutions or the retail public.
  2. This product is an ownerless currency, yet most “blockchain companies” are not building general-use currency systems, but far more niche systems for businesses.
  3. Bitcoin has not been exceeded in use or market cap by any of these subsequent systems, public or private, even after thousands of attempts.

Explanations of Bitcoin’s promise have lacked the requisite context needed by investors. Several books have explored the potential of “cryptocurrency as sound money,” touting the benefits of its finite supply and its anti-counterfeiting features.[28] [29] [30] But the motivations of the participants who create these systems are rarely discussed.

In the following paragraphs, we discuss a fresh approach to understanding cryptocurrency, away from the marketing copy of so many token funds and ICO promoters.

New qualitative approaches are needed

Many useful quantitative studies have been done on blockchain and cryptocurrency, presenting data on the number of wallets in use, currency flows, transaction throughput, and price action, as in studies by Cambridge University and the World Economic Forum.[31] [32] However, these studies stop short of explaining why the pursuit of a functional cryptocurrency was interesting to technologists in the first place. What behaviors, exactly, are these systems enabling?

When behavioral phenomena are driven by the promise of new territory or industry, the kind of “territory of freedom” alluded to by Satoshi Nakamoto in his or her letters, the promise of such territory can be hard to measure empirically. Roger Martin, dean of the Rotman School of Management, argues that “the greatest weakness of the quantitative approach is that it decontextualizes human behavior, removing an event from its real-world setting and ignoring the effects of variables not included in the model.”[33]

Several pertinent questions can lead us in the right direction: [34]

  1. Framing the problem as a phenomenon:
     • “What’s wrong with the cryptocurrency boom?”
  2. Collecting information about key participants:
     • “What is the historical background behind the phenomenon?”
     • “Why is it emerging now?”
  3. Finding patterns and insights:
     • “How do the key participants organize themselves?”
     • “Where have they been successful, and how do their tactics work?”
  4. Hypothesizing about potential impact:
     • “Where does value accrue?”
     • “Where should investors allocate?”

This essay is intended as a high-level primer for investors, to answer these questions and more. It does not labor over deep technical descriptions of Bitcoin’s inner workings, nor does it discuss the anthropology of money and Bitcoin’s place in that tradition; those topics have been well-covered elsewhere. Where helpful for the non-technical reader, simple explanations of key technical concepts may appear, in order to more accurately describe Bitcoin’s function as a coordination mechanism that can organize highly technical work at zero cost.


Footnotes

[1] “Munger on Cryptocurrency Trading: ‘To Me It’s Just Dementia,’” CNBC, May 07, 2018. https://iterative.tools/2LQ7wJO.

[2] Hileman, Garrick, and Michel Rauchs. “2017 Global Cryptocurrency Benchmarking Study.” SSRN Electronic Journal, 2017. Page 10. https://iterative.tools/2N0bPCR.

[3] Binance Equaled Nasdaq Profit in Q1, UseTheBitcoin.com, https://iterative.tools/2OMrxmI.

[4] $6.3 Billion: 2018 ICO Funding Has Passed 2017’s Total, CoinDesk. https://iterative.tools/2O3SJQZ.

[5] Liao, Shannon. “Facebook Is Creating a Mysterious Blockchain Division.” The Verge, May 08, 2018. https://iterative.tools/2MLmSVk.

[6] Kharif, Olga and Bergen, Mark. “Google Is Working on Its Own Blockchain-Related Technology,” Bloomberg.com. https://iterative.tools/2LU6Va6.

[7] Narayanan, Arvind, Joseph Bonneau, Edward Felten, Andrew Miller, and Steven Goldfeder. Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press, 2016. https://iterative.tools/2N0y5N1.

[8] “Bitcoin’s Academic Pedigree.” Research for Practice: Cryptocurrencies, Blockchains, and Smart Contracts, ACM Queue. https://iterative.tools/2O8HZk9.

[9] “Cyber Enforcement Actions.” SEC.gov. June 20, 2017. https://iterative.tools/2LR6dKt.

[10] https://coin.dance/volume/localbitcoins/VES

[11] https://coin.dance/volume/localbitcoins/COP

[12] https://coin.dance/volume/localbitcoins/PEN

[13] “Gartner Says Global IT Spending to Reach $3.7 Trillion in 2018.” Hype Cycle Research Methodology, Gartner Inc., https://iterative.tools/2oCcfWb.

[14] “Bitcoin’s Academic Pedigree.” ACM Queue. https://iterative.tools/2O8HZk9.

[15] “Fast Forward: Rethinking Enterprises, Ecosystems and Economies with Blockchains.” Blockchain, the next Disruptor for Finance. 2018. https://iterative.tools/2xAuEHJ. Page 1-2.

[16] Ibid.

[17] Nakamoto, Satoshi. “Re: Bitcoin P2P E-cash Paper.” The Mail Archive. https://iterative.tools/2PWZZvc.

[18] “Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes. Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model in which the consumption and supply of resources is divided.” “Peer-to-peer,” Wikipedia. August 27, 2018. https://en.wikipedia.org/wiki/Peer-to-peer.

[19] Nakamoto, Satoshi. “Transactions and Scripts: DUP HASH160 … EQUALVERIFY CHECKSIG.” Bitcointalk, June 18, 2010. https://iterative.tools/2xBHxS2.

[20] “A Brief History of Digital Currencies,” Strategic International Management Academic Library, https://iterative.tools/2QZgbNZ.

[21] “Why Was ECDSA Chosen over Schnorr Signatures in the Initial Design?” Bitcoin Stack Exchange, https://iterative.tools/2wFsia8.

[22] Nakamoto, Satoshi. “Transactions and Scripts: DUP HASH160 … EQUALVERIFY CHECKSIG.” Bitcointalk, June 18, 2010. https://iterative.tools/2xBHxS2.

[23] Nakamoto, Satoshi. “What’s with This Odd Generation?” Bitcointalk, February 14, 2010. https://iterative.tools/2NBpqFP.

[24] “Bitcoin Transaction Value Reaches $1.3T As It Passes PayPal and Discover.” XBT.net. August 30, 2018. https://iterative.tools/2NLwOuo.

[25] “The promise of blockchain,” McKinsey, https://iterative.tools/2CIFz7z, 2017, Page 3.

[26] Iansiti, Marco, and Karim R. Lakhani. “The Truth About Blockchain.” Harvard Business Review. March 06, 2018. https://iterative.tools/2OP45W5.

[27] Watkins, Thayer, “Suboptimization,” https://iterative.tools/2QVL6uk.

[28] Don Tapscott and Alex Tapscott. “Realizing the Potential of Blockchain: A Multistakeholder Approach to the Stewardship of Blockchain and Cryptocurrencies,” 2017. https://iterative.tools/2Dt1qjs.

[29] Ammous, Saifedean. The Bitcoin Standard: The Decentralized Alternative to Central Banking. John Wiley & Sons, 2018. https://iterative.tools/2OQVu54.

[30] Popper, Nathaniel. Digital Gold: The Untold Story of Bitcoin. Penguin Books, 2016. https://iterative.tools/2N0vFhs.

[31] Hileman, Garrick, and Michel Rauchs. “2017 Global Cryptocurrency Benchmarking Study.” SSRN Electronic Journal, 2017. Page 10. https://iterative.tools/2N0bPCR.

[32] Thesis (Realizing the Potential of Blockchain), World Economic Forum, June 2017. https://iterative.tools/2Pv77yO.

[33] Madsbjerg, Christian, and Mikkel B. Rasmussen. The Moment of Clarity: Using the Human Sciences to Solve Your Hardest Business Problems. Harvard Business Review Press, 2014. Page 42. https://iterative.tools/2xOoDqn.

[34] Ibid, page 109.


Section II

Historical Background On The Phenomenon

Using context to understand why hackers set out to build digital currency systems.

“Corporations have neither bodies to be punished, nor souls to be condemned; they therefore do as they like.”

— Edward Thurlow, Lord Chancellor of Great Britain, 1778-1792.[35]

Satoshi Nakamoto was the first participant in his own network, and left a message within the very first “block” of data produced by Bitcoin. The message within this so-called Genesis Block read:

Figure 1. The message left by Satoshi Nakamoto in Bitcoin’s Genesis Block.
(Credit: Reddit)[36]

The original headline appears in the British paper The Times (see figure below). The inclusion of this note is a source of widespread confusion.

Given what we know about Nakamoto’s motivation to create a free economic space outside the purview of institutional oversight, it would seem that this message makes light of the sympathetic relationship between politicians and central bankers. Many people use this allusion to infer that Bitcoin was purpose-built as some kind of disruptor or destroyer of central banks. Taken this way, the headline would seem to be a statement of superiority or self-righteousness.

We suggest that this is a mischaracterization. If Bitcoin does evolve into a large-scale alternative currency system, then Nakamoto’s use of The Times headline will strike historians as timely, but it is more than just a political statement.

Figure 2. The headline reproduced in the Genesis Block.
(Credit: Twitter)

In fact, putting a headline in the Genesis Block has a second, more practical purpose: it serves as a timestamp. By reproducing the text from that day’s paper, Nakamoto proved that the first “block” of data produced by the network was indeed made that day, and not prior. Nakamoto knew Bitcoin was a new kind of network that prospective participants would scarcely believe was real. At the outset, it would be important to send a signal of integrity to people who might join. Getting volunteers to value the project was top priority, indeed a far higher priority than mocking central bankers.
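The timestamping trick is simple to sketch (illustrative only; Bitcoin's real genesis block embeds the text in its coinbase transaction inside a more elaborate header format). By folding unpredictable text from that day's newspaper into the block's hash, Nakamoto produced a block that demonstrably could not have been computed before the headline was published:

```python
import hashlib

def make_genesis(headline: str) -> str:
    """Commit to an unpredictable piece of text inside the first block's hash."""
    coinbase = f"genesis|{headline}"
    return hashlib.sha256(coinbase.encode()).hexdigest()

# The actual text embedded in Bitcoin's genesis block:
HEADLINE = "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
genesis = make_genesis(HEADLINE)

# Anyone can later confirm the commitment: the hash matches only if the
# block really contained that day's headline, which nobody could have
# predicted in advance.
print(make_genesis(HEADLINE) == genesis)                            # True
print(make_genesis("some headline guessed in advance") == genesis)  # False
```

The headline proves "not before January 3, 2009"; the chain of blocks built on top proves "not after," since each subsequent block commits to the genesis hash.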

For investors outside the technology industry, understanding this volunteer-based way of working is critical to understanding why Bitcoin operates the way it does, and why it is an improvement on conventional methods of human collaboration. To get to these points, we will first explore the origins of the “war” that Satoshi is engaged in, and how the invention of Bitcoin is meant to change the tide.

The old friction between technologists and management

For the last 50 years, corporate technology companies have been increasingly at odds with the engineers who build their critical systems. Recent headlines tell the story: at Microsoft, Amazon, and Salesforce, employees protested contracts with Customs and Border Protection and ICE.[37] [38] At Google, employees protested the company’s Project Maven AI contracts for the Department of Defense, which promised to increase the accuracy of drone strikes; the company bowed out of Project Maven, but has said it will continue to work with the US military on other projects.[39] [40] Google’s announcement that it would agree to censor search results inside China drew 1,400 workers to protest.[41] Microsoft is facing a lawsuit by two employees who say they suffered PTSD after seeing child pornography in “content moderation” roles.[42] YouTube employees describe their jobs as a “daily hell of ethics debate.”[43] Facebook has drawn protests over the gentrification wrought by its tens of thousands of employees, as well as more recent protests over its “intolerant” political culture.[44] [45]

Other abuses of technological systems include the personal data leak at Equifax, and the abuse of account-creation privileges within the Wells Fargo bank computer system, where accounts were opened and cards issued—in some cases, with forged signatures—in service of sales goals.[46] [47] The worst example of abusive corporate software may be COMPAS, an automated risk-assessment tool used by some court systems in sentencing, which has been shown to produce recommendations biased by the defendant’s race.[48]

Tensions between software developers and their employers have spilled out of Silicon Valley and into mainstream news. “This engineer’s lament is a microcosm of a larger trend sweeping across the Peninsula” of San Francisco, reported Vanity Fair in August of 2018:[49]

“In Silicon Valley’s halcyon days, employees didn’t have any qualms about the ethics of the companies they were joining since many honestly believed that they were going to advance a corporation that was going to—yes—change the world. The people who helped transform the Bay Area into the greatest wealth-generation machine in human history—and themselves into millionaires and billionaires in the process—are now turning their backs on the likes of hegemonic corporations who, in their own depictions, moved fast and broke things without an end in sight.”

The article quotes an anonymous Uber executive who fears that ethical issues will motivate engineers to leave en masse: “If we can’t hire any good engineers, we’re fucked.”

This is a liminal moment in business, where the “good engineers” suddenly have leverage over the wealthy and elite management of some of the largest corporations in the history of the world. This development did not arrive overnight; it has its origins in a tension that originated decades ago.

Next, we will look at how the balance of power shifted, and how Bitcoin tips the scale further for the “good engineers.” To appreciate how the software engineers got their leverage, we must begin in the early 20th century, and learn how managers and engineers got to be at odds in the first place.

The emergence of the corporate institution (1900-1929)

The study of human behavior in a business context has a rich tradition. Perhaps the first person to take a meaningful step forward in this discipline was Frederick Winslow Taylor. “Taylorism,” his conception of management science, was all about rational planning, reducing waste, analyzing data, and standardizing best practices.[50] Business owners used these techniques to drive workers uncommonly hard. Andrew Carnegie obsessed over worker productivity; during the Homestead Strike of 1892, his company brought in an armed private police force, and picketing workers were shot in the ensuing battle.[51]

Thorstein Veblen was a Norwegian-American economist who published his seminal study of practitioners of management science in 1904. He developed a series of insights about the nature of “institutions,” as distinct from the “technologies” used by them. This distinction is a good starting point for understanding the problems that arise for people who create new technologies within institutions.[52]

An important aspect of Veblen’s concept of institutions is that they are by nature non-dynamic: they resist changes that don’t benefit the top people in the hierarchical structure. Hierarchy persists through what Veblen called “ceremonial aspects,” traditional privileges that serve to elevate the decision-makers. It is new technological tools and processes that make the institution profitable. But so-called “spurious” tools may also be produced because they have ceremonial aspects that make management look or feel good.[53]

After the Great Depression, the historian and sociologist Lewis Mumford would develop the idea that “technology” had a dual nature. Polytechnic developments involved complex frameworks which combined technologies to solve real human problems; Monotechnic developments were technology for its own sake.[54] Monotechnics oppress human beings, Mumford argued, citing the automobile as one such development that crowded out pedestrians and bicyclists from roads, and led to a massive annual death toll on American highways.

The institutions of the day, corporations and governments, Mumford called megamachines. Megamachines, he said, are composed of many human beings, each with a specialized role in a larger bureaucracy. He called these individuals “servo units.” Mumford argued that for these people, the specialized nature of the work weakened psychological barriers against questionable commands from leadership, because each individual was responsible for only one small aspect of the machine’s overall goal. At the top of a megamachine sat a corporate scion, dictator, or commander to whom god-like attributes were ascribed. He cited the lionization of Egyptian Pharaohs and Soviet dictators as examples.

Ceremonial, spurious, monotechnic developments could lead to extremely deadly megamachines, said Mumford, as in the case of the Nazi War Machine. This phenomenon owed itself to the abstraction of the work into sub-tasks and specialties (such as assembly line work, radio communications). This abstraction allowed the servo-units to work on extreme or heinous projects without ethical involvement, because they only comprised one small step of the larger process. Mumford called servo-units in such a machine “Eichmanns,” after the Nazi official who coordinated the logistics of the German concentration camps in World War II.

In the early 20th century, the new and trendy field of “management science” was greatly influenced by Fordism: the practices of Henry Ford. Fordist mass production was characterized by a rigorous and somewhat dreary focus on efficiency, specialization, mass production, reasonable hours, and living wages.[55] But when the Great Depression came, owners like Ford laid off workers by the tens of thousands. Wages dropped, but the punishing nature of the work remained.

Ford Motor Company laid off 60,000 workers in August of 1931. Less than a year later, security guards opened fire on several thousand picketing workers, killing four and wounding 25. Henry Ford placed machine gun nests around his home, and equipped guards with teargas and surplus ammunition.[56] As the 1930s wore on, American workers continued to riot and picket against ruthless owners’ tactics.

Modern management emerges to protect workers (1930-1940)

After the Depression, a class of professionals emerged to take major business decisions away from the business owners. Industry would be run by professional managers, who would execute plans in the best interest of both the owners and the employees. They derived their positions and power from their competence, not their percentage of ownership. The greedy shareholders could be held at bay in this new structure. [57] John Kenneth Galbraith, the Harvard economics professor, studied this phenomenon at the time:

“The power passed from one man—there were no women, or not many—into a structure, a bureaucracy, and that is the modern corporation: it is a great bureaucratic apparatus to which I gave the name the Technostructure. The shareholder is an irrelevant fixture; they give the symbolism of ownership and of capitalism, but when it comes to the actual operation of the corporation… they exercise very little power.”[58]

This “bureaucratic apparatus” of the Technostructure consisted of upper-tier managers, analysts, executives, planners, administrators, operational “back office” staff, sales and marketing, controllers, accountants, and other non-technical white-collar staff.[59]

In 1937, Ronald Coase, a future Nobel Prize winner, built on the ideas of the managerial scientists to theorize why these massive firms were emerging, and why they accumulated so many workers. He argued this behavior was rational, aimed at reducing transaction costs. He wrote:

“The source of the gain from having a firm is that the operation of a market costs something and that, by forming an organization and allowing the allocation of resources to be determined administratively, these costs are saved.”[60]

In other words, in the hiring of skilled labor, it is cheaper to retain a salaried worker who returns each day, than to go out each day and select a new temporary candidate from a pool of contractors in a “market.” He continued:[61]

“Firms will emerge to organize what would otherwise be market transactions whenever their costs were less than carrying out the transactions through the market.”

The corporation was the most efficient way to mass produce and distribute consumer goods: it tied together supply chains, production facilities, and distribution networks under centralized management.[62] This increased efficiencies and productivity, lowered marginal costs, and made goods and services cheaper for consumers.
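Coase’s cost comparison can be made concrete with a toy calculation. All figures below are hypothetical, chosen only to illustrate the logic: a firm emerges whenever internal coordination costs less than transacting repeatedly in the market for the same work.

```python
# Toy illustration (hypothetical numbers) of Coase's transaction-cost logic:
# a firm emerges when internal coordination is cheaper than repeated
# market transactions for the same work.

DAYS = 250  # assumed working days per year

# Market route: select a fresh contractor each day.
contractor_day_rate = 400     # hypothetical daily rate
search_cost_per_hire = 150    # hypothetical daily cost of finding and vetting
market_cost = DAYS * (contractor_day_rate + search_cost_per_hire)

# Firm route: retain one salaried worker who returns each day.
salary = DAYS * 400           # comparable pay, but no daily search cost
overhead = 20_000             # hypothetical administrative overhead
firm_cost = salary + overhead

print(market_cost)              # 137500
print(firm_cost)                # 120000
print(firm_cost < market_cost)  # True: the firm is the cheaper structure
```

On these assumed numbers the daily search cost, compounded over a year, outweighs the firm’s fixed overhead, which is exactly the saving Coase describes.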

Managerial bureaucracy becomes abusive to the engineer class (1940-1970)

As of 1932, most of these corporations were, for all practical purposes, no longer controlled by their majority shareholders; economists classified them as “management-controlled.”[63] This arrangement, which became known as the “separation of ownership and control,” spread throughout the major public corporations.

The moral hazards of management-controlled companies became increasingly obvious as the 1930s wore on. Management-controlled companies were run by executives who, despite not owning many shares, eventually achieved “self-perpetuating positions of control” over policy, because they were able to manipulate the boards of directors through proxies and majority shareholder votes.[64] These machinations sometimes created high levels of conflict. In the early 1940s, the idea emerged that this structural divide in the corporate world was being mimicked in the social and political worlds, with a distinct elite “management class” emerging in society.[65]

Institutional economists drew a distinction between the management class and the class of “technical operators” (the people doing the work, in many cases engineers and technicians). The managerial elite consisted of the “analysts” or “specialists” who acted as the bureaucratic planners, budgetary allocators, and non-technical managers.[66]

A strange power dynamic developed between the analysts and the technical staff in the computer companies which emerged between 1957 and 1969; this dynamic was studied by industrial economists in both the UK and US.[67] They found that the analysts jockeyed for power, creating conflict. They won favor and influence within the company by expanding their divisions, creating opportunities to hire more direct reports or to win a new promotion, a tactic known as “empire building.”[68] The overall effect on the organization was misallocation of resources and incredible pressure to grow.[69] Sales and development cycles were persistently rushed. The computer analysts’ slogan became “if it works, it’s obsolescent”; the analysts had “a vested interest in change.”[70]

This dynamic created dysfunction. Managers used a variety of social tactics to enforce their will and agenda in spite of technical realities, reflecting Veblen’s observations about “ceremonial” institutions 75 years before.[71] Documented tactics included:

  • Organizational inertia: New and threatening ideas are blocked with “idea killers” such as: “the boss won’t like it,” “it’s not policy,” “I don’t have the authority,” “it’s never been tried,” “we’ve always done it that way,” and “why change something that works?”
  • Budget games: “Foot in the door,” where a new program is sold in modestly, concealing its real magnitude; “Hidden ball,” where a politically unattractive program is concealed within an attractive one; “Divide and conquer,” where approval of a budget request is sought from more than one supervisor; “It’s free,” where it is argued that someone else will pay for the project so the organization might as well approve it; “Razzle-dazzle,” where a request is supported with voluminous data, but arranged in such a way that its significance is not clear; “Delayed buck,” where deliverables are submitted late, with the argument that the budget guidelines require too much detailed calculation; and many others.

These tales from the 1960s anticipate the emergence of the popular cartoon Dilbert in the 1990s, which skewered absurd managerial behavior. Its author, Scott Adams, had worked as a computer programmer and manager at Pacific Bell from 1986 to 1995.[72]

Figure 3. Dilbert captured the frustration of software engineers in a corporate setting.
(Credit: Scott Adams)[73]


Group identity develops amongst professional technologists (1980-2000)

The dictatorial behavior of the management class belied the true balance of power in technical organizations.

In the 1980s, the entire weight of many industrial giants rested on their technologists. But their role put them in a strange position, at odds with the rest of their organizations. Placed at the margins of the organization, closest to the work, they were removed from the C-suite and its power plays. Because they did not work with executives directly, the technologists identified far less with the heads of the company than did the managers, who reported directly to the C-suite.[74]

The technologists’ work was enjoyable to them, but opaque to the rest of the organization. A power dynamic emerged between the technical operators and the rest of the company; their projects were difficult to supervise, and proceeded whimsically, in ways that reflected the developers’ own interests.[75]

Their power to work this way originated in their critical skills. These skills acted as a wedge within organizations, earning technical operators considerable freedom of direction. The efficacy of this wedge increased when a technical operator provided a skill in great demand, affording job mobility and reducing dependence on the organization. Company ideology was typically not a strong force amongst technologists, compared with “professional ideology,” the belief in the profession and its norms.[76] The elite technologists were becoming outsiders within their own companies.

Instead of loyalty to company or CEO, technologists developed, as a professional goal, loyalty to the end-user or client. A company’s technologists were focused on the needs of the existing customer, while the analysts and managers (whose work did not deal directly with the end-user) supported more abstract goals like efficiency and growth.[77]

The hacker movement emerges

The hacker movement had originated amongst software-makers at MIT in the 1960s.[78] Perhaps seen as an antidote to the managerial dysfunction inside the older corporate tech companies, the hacker movement’s focus on practical, useful, and excellent software spread rapidly across the country in the 1980s and 1990s.[79] MIT software activist Richard Stallman described hackers as playful but diligent problem-solvers who prided themselves on their individual ingenuity: [80] [81]

“What they had in common was mainly love of excellence and programming. They wanted to make their programs that they used be as good as they could. They also wanted to make them do neat things. They wanted to be able to do something in a more exciting way than anyone believed possible and show ‘Look how wonderful this is. I bet you didn’t believe this could be done.’ Hackers don’t want to work, they want to play.”

At a conference in 1984, a hacker who had gone to work at Apple to build the Macintosh described hacker status as follows: “Hackers can do almost anything and be a hacker. It’s not necessarily high tech. I think it has to do with craftsmanship and caring about what you’re doing.” [82]

The hacker movement is not unlike the Luddite movement of the early 19th century, in which cotton and wool artisans in central England rose up to destroy the mechanized looms which threatened to automate their work.[83] Unlike the Luddites, who proposed no better alternative to the loom, hackers came up with another approach to making software which has since produced products superior to their commercial alternatives. By using the Internet to collaborate, groups of volunteer developers have come to produce software that rivals the products of nation states and corporations.[84]

New Jersey style emerges

The “New Jersey style” of hacking was originated by Unix engineers at AT&T in suburban New Jersey. AT&T had settled an antitrust suit in 1956 under a consent decree which precluded it from entering the computer business; it was thus free to circulate the computer operating system it had built, called Unix, to other private companies and research institutions throughout the 1970s. The source code was included, and these institutions regularly modified it to run on their particular minicomputers. Hacking Unix became a cultural phenomenon within R&D departments around the US.

Unix was rewritten for personal computers by several groups of developers. Linus Torvalds created his own version, “Linux,” and distributed it for free, just as AT&T had done with Unix. (As we will show, Linux has become enormously successful.) The approach taken by Torvalds and other Unix hackers uses playfulness as an energizing force to build useful (if difficult) free software projects.[85] The Finnish computer scientist and philosopher Pekka Himanen wrote at the time: “To do the Unix philosophy right, you have to be loyal to excellence. You have to believe that software is a craft worth all the intelligence and passion you can muster.”[86]

R&D developers realize “Worse is Better”

Out of New Jersey style, software engineers developed a set of ad-hoc design principles that went against the perfectionism of institutionalized software. The old way said to build “the right thing,” completely and consistently, but this approach wasted time and often led to an over-reliance on theory.

Written by Richard Gabriel in the late 1980s and circulated in 1991 by Jamie Zawinski, later a Netscape Navigator engineer, the “worse-is-better” philosophy boiled down the best of New Jersey style and hacker wisdom. It was seen as a practical improvement on the MIT-Stanford hacker approach. Much like the MIT ethic, worse-is-better values excellence in software. But unlike MIT-Stanford, the worse-is-better approach redefines “excellence” in a way that prioritizes positive real-world user feedback and adoption over theoretical ideals.

Worse-is-better holds that, so long as the design of the initial program is a clear expression of a solution to a specific problem, then it will take less time and effort to implement a “good” version initially, and adapt it to new situations, than it will to build a “perfect” version straight away. Releasing software to users early and improving a program often is sometimes called “iterative” development.

Iterative development allows software to spread rapidly and benefit from real-world reactions from users. Programs released early and improved often become successful long before “better” versions written in the MIT approach have a chance to be deployed. With two seminal papers in 1981 and 1982, the concept of “first-mover advantage” emerged in the software industry around the same time that Gabriel was formalizing his ideas about why, in networked software, “worse is better.” [87] [88]

The logic of worse-is-better prioritizes viral growth over fit and finish. Once a “good” program has spread widely, there will be many users with an interest in improving its functionality and making it excellent.[89] An abbreviated version of the principles of “worse is better” appears below. The principles admonish developers to avoid doing what is conceptually pleasing (“the right thing”) in favor of doing whatever results in practical, functional programs (emphasis added):

  • Simplicity: This is the most important consideration in a design.
  • Correctness: The design must be a correct solution to the problem. It is slightly better to be simple than correct.
  • Consistency: Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
  • Completeness: The design must cover as many important situations as is practical. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized.

These conceptual breakthroughs must have been exciting to the technologists of the early 1980s. But the excitement would soon be disrupted by rapid changes in business.

The shareholders use hostile takeovers to clamp down on everyone

The hacker-centric environment inside universities and large research corporations collapsed, as researchers at places like the MIT AI Lab were poached away by venture capitalists to continue their work in a proprietary setting.[91] The hostile-takeover trend had begun a decade before in the UK, where clever investors noticed that many family-run businesses were no longer majority-owned by their founding families. Financiers like Jim Slater and James Goldsmith quietly bought up shares in these companies, eventually wresting enough control to break up and sell off units of the company. This became known as “asset stripping,” and we will return to the topic in Section VII of this essay.[92]

In the 1980s, American bankers hit upon a way to finance takeovers at massive scale: floating so-called junk bonds, then busting up the target company and reaping massive rewards from the sale of the parts.[93] In this way, managerial capitalism eventually lost its hold over business, and became a servant of the capital markets.

“Activist investors” came to represent shareholder interests, and took action to fire and hire C-suite executives who would maximize share price.[94] As the 1990s dawned, many hackers saw their companies struggle to contend with shareholder demands, the threat of hostile takeover, and competition from new Silicon Valley startups.

As tech companies moved faster, they developed ways for management to enforce policy and resource allocation. Microsoft and others adopted a rigorous “stack ranking” system whereby employees were assigned numerical scores at regular intervals through a “performance review” process, in order to determine promotions, bonuses, and team assignments. A certain percentage of bottom-ranking employees were fired. Microsoft abandoned the system in 2013, though variants remain in use at other tech companies.[95] Google adopted stack ranking more recently to establish eligibility for promotions, but does not fire low-scoring employees.[96] Stack ranking systems are widely hated for the uncomfortable power dynamics they create.[97] [98]

Today, investors demand from their companies precise predictions about each quarter’s profitability, and less attention is paid to capital investment. Tesla is one notable technology company which has articulated the way quarterly guidance and short-termism diminish a high-tech company’s long-term prospects.[99] According to the Business Roundtable, a corporate alliance chaired by JPMorgan Chase CEO Jamie Dimon, quarterly guidance has become “detrimental [to] long term strategic investments.”[100]


In this section, we have looked at the ways that 1940s-era management made life unpleasant for high-tech workers, and how these patterns persisted into the 1990s, disenfranchising technical workers. We’ve shown that a strong “guild” identity developed which transcends loyalty to the employer. We’ve associated this identity with the growth of hacker culture and its principles.

Next, we will explore how antipathy towards the management class grew into a wider suspicion of all institutional oversight, and how their struggle to get out from under such oversight acquired a moral dimension. We will examine why hackers looked to cyberspace and cryptography for sanctuary, with a determination to build new tools outside the purview of the management class. We will consider the surprising success of free software tools produced by hackers, and consider the ways that corporate employers have alternately fought, and also tried to emulate, hacker methodology. Finally, we will encounter Bitcoin as the realization of many hacker ambitions in a single network.


Footnotes

[35] Poynder, John. Literary extracts from English and other works; collected during half a century. London: John Hatchard & Son. 1844. https://iterative.tools/2QV38gn.

[36] Reddit.com. https://i.redd.it/t8af0raeoriz.png

[37] “Tech Won’t Build It: The New Tech Resistance,” MIT Panel Discussion, July 2018, https://iterative.tools/2zsbbL1.

[38] Hsu, Jeremy. “Engineers Say ‘No Thanks’ to Silicon Valley Recruiters, Citing Ethical Concerns.” IEEE Spectrum: Technology, Engineering, and Science News, IEEE Spectrum, 9 Aug. 2018, https://iterative.tools/2xzPDus.

[39] “Is This the Beginning of a Tech Worker Revolution?” KQED, August 27, 2018, https://iterative.tools/2QWjTI2.

[40] Bergen, Mark. “Google Renounces AI Weapons; Will Still Work With Military,” June 7, 2018. https://iterative.tools/2QFXAW9.

[41] Conger, Kate, and Daisuke Wakabayashi. “Google Employees Protest Secret Work on Censored Search Engine for China.” The New York Times, August 16, 2018, https://iterative.tools/2LT6Tz2.

[42] Matsakis, Louise. “Ex-Microsoft employees sue over PTSD from reviewing disturbing content,” Mashable, January 14, 2017, https://iterative.tools/2IdMd4A.

[43] Swisher, Kara. “The Expensive Education of Mark Zuckerberg and Silicon Valley.” The New York Times, The New York Times, August 2, 2018, https://iterative.tools/2MZReyv.

[44] Wong, Queenie. “Menlo Park, East Palo Alto Residents Rally against Facebook, Amazon amid Gentrification Concerns.” The Mercury News, April 6, 2017, https://iterative.tools/2OPyy6m.

[45] Conger, Kate, and Sheera Frenkel. “Dozens at Facebook Unite to Challenge Its ‘Intolerant’ Liberal Culture.” The New York Times, August 28, 2018, https://iterative.tools/2NDwfGX.

[46] Tayan, Brian, and Stanford Graduate School of Business. “The Wells Fargo Cross-Selling Scandal.” The Harvard Law School Forum on Corporate Governance and Financial Regulation, https://iterative.tools/2OeVmQ4.

[47] Miller, Ron. “Equifax Data Leak Could Involve 143 Million Consumers.” TechCrunch, September 7, 2017, https://iterative.tools/2QVRrG8.

[48] Israni, Ellora Thadaney. “When an Algorithm Helps Send You to Prison.” The New York Times, October 26, 2017, https://iterative.tools/2NHgcrS.

[49] Bilton, Nick. “‘We’re F–Ked’: There Is Only One Antidote to Silicon Valley’s Ills… Their Engineers.” The Hive, Vanity Fair, August 31, 2018, https://iterative.tools/2OMit1f.

[50] Mitcham, Carl. Encyclopedia of Science, Technology, and Ethics. Macmillan Reference USA, 2005. Pages 1152-1153. https://iterative.tools/2xO8QIh.

[51] Bemis, Edward W. “The Homestead Strike.” Journal of Political Economy, v.2 no.3, 1894, pp. 369–396. https://iterative.tools/2QRf8iO.

[52] Waller, William T. “The Evolution of the Veblenian Dichotomy: Veblen, Hamilton, Ayres, and Foster.” Journal of Economic Issues, vol. 16, no. 3, 1982, pp. 757–771. https://iterative.tools/2NGiuHF.

[53] Ibid.

[54] Mumford, Lewis. Technics and Civilization. Harcourt Brace and Co., 1934. https://iterative.tools/2Q4wXK9.

[55] “Fordism & Postfordism,” Willamette University, https://iterative.tools/2MWSwuc.

[56] “PBS Presentation: The Great Depression,” April 2013, https://iterative.tools/2DqJJRS.

[57] Galbraith, John Kenneth, “The Mayfair Set: Destroy the Technostructure,” BBC, 1999. https://iterative.tools/2NDEwLa.

[58] Ibid.

[59] Galbraith, John Kenneth, The New Industrial State. Mifflin, 1971. https://iterative.tools/2Q4UxWX.

[60] Coase, R. H. “Industrial Organization: A Proposal for Research.” NBER, University of Chicago Press, https://iterative.tools/2NExMMM.

[61] Coase, R. H. “The Firm, the Market, and the Law,” National Bureau of Economics, 1987, Page 7. https://iterative.tools/2zrqIdJ.

[62] Rifkin, Jeremy. The Zero Marginal Cost Society. Palgrave Macmillan, 2014. Page 82. https://iterative.tools/2QTQIoR.

[63] Berle, Adolph. The Modern Corporation and Private Property. Routledge, 1991. https://iterative.tools/2NGle7V.

[64] Ibid.

[65] Burnham, James. The Managerial Revolution. Penguin Books, 1945. Page 118. https://iterative.tools/2zrrpnl.

[66] Mintzberg, Henry. Power in and around Organizations. Prentice-Hall, 1983. Page 136. https://iterative.tools/2xQIYv6.

[67] Pettigrew, Andrew M. The Politics of Organizational Decision Making. Routledge, 2009. Page 77. https://iterative.tools/2xQIYv6

[68] Ibid, page 129.

[69] Ibid, page 133.

[70] Ibid.

[71] Argyris, Chris. Overcoming Organizational Defenses: Facilitating Organizational Learning. Prentice-Hall, 1990. Page 7. https://iterative.tools/2IdG6xj.

[72] “Scott Adams,” Wikipedia, https://en.wikipedia.org/wiki/Scott_Adams.

[73] Dilbert by Scott Adams, Nov. 14, 2014. https://iterative.tools/2QWDE28.

[74] Mintzberg, page 130.

[75] Ibid.

[76] Ibid, page 132.

[77] Ibid.

[78] Castro, Jose Dieguez. Introducing Linux Distros. Apress, 2016. Page 10. https://iterative.tools/2NDOQTo.

[79] Gehring, Verena V. The Internet in Public Life. Rowman & Littlefield, 2004. https://iterative.tools/2pA6Fo0.

[80] Richard Stallman, “Hackers: Wizards of the Electronic Age.” YouTube, 1986, https://iterative.tools/2xAzdlI.

[81] “The Hacker Community and Ethics: An Interview with Richard M. Stallman” Gnu.org, 2002, https://iterative.tools/2Q4LTId.

[82] “The Hacker Ethic,” New York Times, 2001, https://iterative.tools/2pxa7j2.

[83] “Luddite,” Wikipedia, https://en.wikipedia.org/wiki/Luddite.

[84] Soderberg, Johan. Hacking Capitalism: the free and open source software movement. Routledge, 2008. Page 2. https://iterative.tools/2O8Avhe.

[85] Himanen, Pekka. The Hacker Ethic: a Radical Approach to the Philosophy of Business. Random House, 2002. https://iterative.tools/2MWXvLq.

[86] Himanen, Pekka. “The Hacker Ethic,” New York Times, 2001, https://iterative.tools/2pxa7j2.

[87] Spence, A. Michael, RAND Corp., “The learning curve and competition”. Bell Journal of Economics, Spring 1981, 12 (1): pages 49–70. https://iterative.tools/2xP5EvX.

[88] Gilbert, R.J., and Newbery, David, "Preemptive patenting and the persistence of monopoly.” The American Economic Review. June 1982, 72 (3): pages 514–526. https://iterative.tools/2Ob9lpV.

[89] Gabriel, Richard. “The Rise of ‘Worse Is Better,’” 1991, https://iterative.tools/2ONxFuV.

[90] Ante, Spencer. Creative Capital: Georges Doriot and the Birth of Venture Capital. Harvard Business Review Press, 2008. https://iterative.tools/2Rtk1z8.

[91] Fogel, Karl. Producing Open Source Software. O’Reilly Media, 2006. Page 7. https://iterative.tools/2zshkXn.

[92] Cowe, Roger. “Jim Slater Obituary.” The Guardian, Guardian News and Media, 22 Nov. 2015, https://iterative.tools/2xBUkUy.

[93] Staff, Investopedia. “Michael Milken.” Investopedia, 15 May 2018, https://iterative.tools/2IcxjLU.

[94] “The Mayfair Set: Episode 3, Destroy the Technostructure,” BBC, 1999. https://iterative.tools/2NDEwLa.

[95] Warren, Tom, “Microsoft Axes Its Controversial Stack Ranking System,” Verge.com, Nov 12, 2013, https://iterative.tools/2wDAtTN.

[96] “Why Does Google Stack Rank Its Employees despite the High Rate of Disapproval and Controversy within the Company?” Quora, https://iterative.tools/2DqMHpa.

[97] Nisen, Max. “A Lawsuit Claims Microsoft’s Infamous Stack Rankings Made Things Worse for Women.” Quartz, 17 Sept. 2015, https://iterative.tools/2OMnkzC.

[98] Korytko, Andrew. “Do We Need Scheduled Stack-Ranking Anymore?” Forbes Magazine, 12 Dec. 2017, https://iterative.tools/2DspRNX.

[99] “Taking Tesla Private.” Tesla, Inc, 8 Aug. 2018, https://iterative.tools/2Q77XlB.

[100] “Business Roundtable Supports Move Away from Short-Term Guidance,” Business Roundtable, https://iterative.tools/2Oflg6g.


Section III

Understanding How The Key Participants Organize

How hackers approached the building of their own private economy

“Every good work of software starts by scratching a developer’s personal itch.”

— Eric S. Raymond, speaking at the Linux Kongress, Würzburg, Germany, 1997.

In this section we explore how the World Wide Web brought hackers together on message boards and email chains, where they began to organize. We look at their ambition to build private networks, and how they staked out requirements for such networks using the lessons learned in earlier decades.

Hackers begin developing “free” software

Out of the hacker culture grew an informal system of collaborative software-making that existed outside of any individual company.[101] Known as the “free” or “open source” software movement (abbreviated FOSS), this social movement sought to popularize certain ethical priorities in the software industry. Namely, it lobbied for liberal licensing, and against collecting or monetizing data about users or the way they use a given piece of software.

In a software context, the term “free” does not refer to the retail price, but to software “free” to distribute and modify. This sort of freedom to make derivative works is philosophically extended to mean “free of surveillance and monetization of user data through violations of privacy.” What exactly is the link between software licensing and surveillance? The Free Software Foundation says of commercial software:[102]

If we make a copy and give it to a friend, if we try to figure out how the program works, if we put a copy on more than one of our own computers in our own home, we could be caught and fined or put in jail. That’s what’s in the fine print of the license agreement you accept when using proprietary software. The corporations behind proprietary software will often spy on your activities and restrict you from sharing with others. And because our computers control much of our personal information and daily activities, proprietary software represents an unacceptable danger to a free society.

Although the Free Software Foundation drew on philosophies from 1970s hacker culture and academia, its founder, MIT computer scientist Richard Stallman, effectively launched the Free Software movement in 1983 by announcing GNU, a free and open source set of software tools. (A complete OS did not arrive until Linus Torvalds’ kernel was released in 1991, allowing GNU/Linux to become a real alternative to Unix.)[103]

Stallman founded the Free Software Foundation in 1985. The cause was prescient: it anticipated the personal-data hazards that would arise from platforms like Facebook, whose sloppy data-vendor relationships resulted in the violation of the privacy of at least 87 million people in 2016.[104] A separate bug allowed attackers to gain control of 50 million Facebook accounts in 2018.[105]

The GNU Manifesto explicitly calls out the corporate work arrangement as a waste of time. It reads in part (emphasis added):

“We have already greatly reduced the amount of work that the whole society must do for its actual productivity, but only a little of this has translated itself into leisure for workers because much nonproductive activity is required to accompany productive activity. The main causes of this are bureaucracy and isometric struggles against competition.”

The Manifesto contends that free software can reduce these productivity drains in software production, and frames the move toward free software as a technical imperative, “in order for technical gains in productivity to translate into less work for us.”[106]

We have defined free software to mean “free of monetization techniques which contravene user privacy.” In most cases, free software is also free of all the trappings of commercialization, including restrictive copyrights, expensive licenses, and restrictions on alteration and redistribution. Bitcoin and Linux are examples of free software in both senses: they are free of surveillance, and free to distribute and copy.

A system of values has evolved amongst free software developers, who distinguish themselves from proprietary software companies, which do not share their internal innovations publicly for others to build on, and which track users and sell their personal data.

Stallman’s primary critique of commercial software was the preoccupation with unproductive competition and monetization:

“The paradigm of competition is a race: by rewarding the winner, we encourage everyone to run faster…. [But] if the runners forget why the reward is offered and become intent on winning, no matter how, they may find other strategies—such as attacking other runners. If the runners get into a fist fight, they will all finish late. Proprietary and secret software is the moral equivalent of runners in a fist fight…. There is nothing wrong with wanting pay for work, or seeking to maximize one’s income, as long as one does not use means that are destructive. But the means customary in the field of software today are based on destruction. Extracting money from users of a program by restricting their use of it is destructive because the restrictions reduce the amount and the ways that the program can be used. This reduces the amount of wealth that humanity derives from the program. When there is a deliberate choice to restrict, the harmful consequences are deliberate destruction.”[107]

The “non-productive work” cited by Stallman harkens back to Veblen’s conception of “spurious technologies” developed in the service of some internal ceremonial purpose, to reinforce the existing company hierarchy: [108]

“Spurious ‘technological’ developments… are those which are encapsulated by a ceremonial power system whose main concern is to control the use, direction, and consequences of that development while simultaneously serving as the institutional vehicle for defining the limits and boundaries upon that technology through special domination efforts of the legal system, the property system, and the information system. These limits and boundaries are generally set to best serve the institutions seeking such control… This is the way the ruling and dominant institutions of society maintain and try to extend their hegemony over the lives of people.”

Hacker principles are codified in “Cathedral versus Bazaar”

In 1997, as the Web was gaining momentum, hacker Eric Raymond presented a metaphor for the way hackers developed software together. He compared the hacker approach, which relied on voluntary contributions, to a marketplace of participants who could interact as they wished: a bazaar.

Commercial software, he said, was like the building of a cathedral, with its emphasis on central planning and grand, abstract visions. The cathedral, he said, was over-wrought, slow, and impersonally designed. Hacker software, he claimed, was adaptable and served a larger audience, like an open bazaar.

With this metaphor in mind, Raymond codified 19 influential “lessons” on good practice in free open source software development.[109] Some of the lessons appear below:

  1. Every good work of software starts by scratching a developer’s personal itch.
  2. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.
  3. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
  4. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
  5. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong.
  6. Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away. (Attributed to Antoine de Saint-Exupéry)
  7. Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.
  8. Provided the development coordinator has a communications medium at least as good as the Internet, and knows how to lead without coercion, many heads are inevitably better than one.

These ideas would come to crystallize the hacker approach to building software.

Hacker sub-cultures collide in Cyberspace

As the Web proliferated, hacker subcultures collided on message-boards and forums. All of them found they had a core set of common attitudes and behaviors including:

  • Sharing software and information
  • Freedom of inquiry
  • The right to fork the software[110]
  • Distaste for authority
  • Playfulness and cleverness

But they had different ideas about how the Internet would develop in the future.

Utopian ideas about the power of computer networks to create post-capitalist societies had emerged as early as 1968.[111] The utopians thought networked computers might allow society to live in a kind of Garden of Eden, mediated by autonomous computerized agents, free of labor, and co-existing with nature. [112] [113]

There were also dystopian visions. The young fiction writer William Gibson coined the term “cyberspace” in his 1982 short story “Burning Chrome.” In his conception, cyberspace was a place where massive corporations could operate with impunity. In his story, hackers could enter cyberspace in a literal way, traversing systems so powerful they could crush human minds. In cyberspace, Gibson imagined, government was powerless to protect anyone; there were no laws, and politicians were irrelevant. There was nothing but the raw and brutal power of the modern conglomerate.[114] Gibson, Bruce Sterling, Rudy Rucker, and other writers went on to form the core of this radically dystopian literary movement.

The Utopians start getting rich

Another group of hackers hailed from the original 1960s counterculture. Many of them had a sanguine outlook on the Web as a safe new world where radical things could come true. As with the acid counterculture, cyberspace could be a place where individuals were liberated from old, corrupt power hierarchies.[115]

This optimistic view pervaded the entrepreneurial circles of Silicon Valley in the 1980s and 1990s, creating an extremely positive view of technology as both a force for good and a path to riches. One British academic wrote at the time:[116]

“This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley… [It] promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich.”

The ideas of the “aging hippies” culminated with the “Declaration of Independence of Cyberspace” in 1996, written by a former Grateful Dead lyricist named John Perry Barlow, who had been part of the acid counterculture.[117] By the mid-1990s, Silicon Valley startup culture and the upstart Wired magazine were coalescing around Barlow’s utopian vision of the World Wide Web. He began holding gatherings he called Cyberthons, as an attempt to bring the movement together. They unintentionally became a breeding ground for entrepreneurship, says Barlow:

“As it was conceived, [Cyberthon] was supposed to be the 90s equivalent of the Acid Test, and we had thought to involve some of the same personnel. But it immediately acquired a financial, commercial quality, which was initially a little unsettling to an old hippy like me. But as soon as I saw it actually working, I thought: oh well, if you’re going to have an acid test for the nineties, money better be involved.”[118]

Emergence of Cypherpunk movement

But while the utopians believed everyone would become “hip and rich,” the dystopians believed that a consumer Internet would be a panopticon of corporate and governmental control and spying, the way William Gibson had imagined. They set out to save themselves from it.

They saw a potential solution emerging in cryptographic systems that could escape surveillance and control. Tim May, formerly Intel’s assistant chief scientist, wrote The Crypto Anarchist Manifesto in 1992:[119]

“The technology for this revolution—and it surely will be both a social and economic revolution—has existed in theory for the past decade. The methods are based upon public-key encryption, zero-knowledge interactive proof systems, and various software protocols for interaction, authentication, and verification. The focus has until now been on academic conferences in Europe and the U.S., conferences monitored closely by the National Security Agency. But only recently have computer networks and personal computers attained sufficient speed to make the ideas practically realizable.”

Until the 1990s, US regulators classified strong cryptography as a weapons technology. In 1995, cryptographer Daniel Bernstein sued the US State Department over these export controls, after the government determined that encryption source code was legally a “munition.” The State Department lost, and cryptographic code can now be freely published and transmitted.[120]

Strong cryptography has an unusual property: it is easier to deploy than to destroy. This is a rare quality for any man-made structure, physical or digital. Until the 20th century, most “secure” man-made facilities were laborious to construct and relatively easy to penetrate with the right explosives or machinery: castles fall to siege warfare, bunkers collapse under bombing, and secret codes break under computational attack. Princeton computer science professor Arvind Narayanan writes:[121]

“For over 2,000 years, evidence seemed to support Edgar Allan Poe’s assertion, ‘human ingenuity cannot concoct a cypher which human ingenuity cannot resolve,’ implying a cat-and-mouse game with an advantage to the party with more skills and resources. This changed abruptly in the 1970s owing to three separate developments: the symmetric cipher DES (Data Encryption Standard), the asymmetric cipher RSA, and Diffie-Hellman key exchange.”

Of the 1990s, he says:

“For the first time, some encryption algorithms came with clear mathematical evidence (albeit not proofs) of their strength. These developments came on the eve of the microcomputing revolution, and computers were gradually coming to be seen as tools of empowerment and autonomy rather than instruments of the state. These were the seeds of the ‘crypto dream.’”[122]
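The Diffie-Hellman breakthrough Narayanan cites can be made concrete with a toy sketch. The parameters below (`P` and `G`) are deliberately tiny, illustrative choices rather than values from any real standard; deployed systems use primes of 2048 bits or more. The point is the one that mattered to the cypherpunks: two parties derive a shared secret without ever transmitting it.

```python
# Toy Diffie-Hellman key exchange. Parameters are small for readability
# and are NOT secure; real deployments use much larger primes.
import secrets

P = 0xFFFFFFFB  # a small prime modulus (illustrative only)
G = 5           # public generator

def keypair():
    """Generate a private exponent and the corresponding public value."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)  # modular exponentiation
    return private, public

# Alice and Bob each publish only their public values.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key;
# both arrive at the same shared secret, which never crossed the wire.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

assert alice_secret == bob_secret
```

An eavesdropper who sees only `alice_pub` and `bob_pub` must solve a discrete logarithm to recover the secret, which is what makes the scheme "easier to deploy than to destroy" at realistic key sizes.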

Cypherpunks were a subculture of the hacker movement with a focus on cryptography and privacy. They had their own manifesto, written in 1993, and their own mailing list which operated from 1992 to 2013 and at one point numbered 2,000 members.[123] A truncated version of the manifesto is reproduced below. In the final lines, it declares a need for a digital currency system as a way to gain privacy from institutional oversight:

The Cypherpunk Manifesto

The term “cypherpunk” is a play on words, derived from the term “cyberpunk,” the sub-genre of science fiction pioneered by William Gibson and his contemporaries.[124] The Cypherpunk Manifesto reads:

“Therefore, privacy in an open society requires anonymous transaction systems. Until now, cash has been the primary such system. An anonymous transaction system is not a secret transaction system. An anonymous system empowers individuals to reveal their identity when desired and only when desired; this is the essence of privacy. Privacy in an open society also requires cryptography… We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence. It is to their advantage to speak of us, and we should expect that they will speak. To try to prevent their speech is to fight against the realities of information. Information does not just want to be free, it longs to be free. Information expands to fill the available storage space. Information is Rumor’s younger, stronger cousin; Information is fleeter of foot, has more eyes, knows more, and understands less than Rumor. We must defend our own privacy if we expect to have any. We must come together and create systems which allow anonymous transactions to take place. People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers. The technologies of the past did not allow for strong privacy, but electronic technologies do. We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.”

There would be many attempts to create digital money systems, some by names on the mailing list. One individual on the mailing list was Satoshi Nakamoto. Others included Tim May, the originator of crypto-anarchy; Wei Dai, who proposed an early concept of P2P digital currency; Bram Cohen, creator of BitTorrent; Julian Assange, who would later go on to found WikiLeaks; Phil Zimmermann, creator of PGP encryption; Moxie Marlinspike, developer of the Signal protocol and the Signal messenger application; and Zooko Wilcox-O’Hearn of the Zcash project.[125] [126]

Cryptographic systems acquire a “moral quality”

Modern-day engineers have made repeated efforts to create organizations which enforce ethical principles in their fields, including:

    1. The National Society of Professional Engineers adopts a code of ethics focused on social responsibility: “the safety, health, and welfare of the public.”
    2. The Union of Concerned Scientists is formed at MIT.
    3. The International Association for Cryptologic Research (IACR) is formed to advance the use of cryptography in the interest of public welfare.
    4. The Electronic Frontier Foundation (EFF) is formed.

The technological optimism that characterized 1990s Silicon Valley also laid some of the industry’s growing ethical traps. In a 2015 paper entitled “The Moral Character of Cryptographic Work,” UC Davis computer science professor Phillip Rogaway suggested that practitioners of technology should closely examine the assumption that software is by nature “good” for its users:[127]

“If you’re a technological optimist, a rosy future flows from the wellspring of your work. This implies a limitation on ethical responsibility. The important thing is to do the work, and do it well. This even becomes a moral imperative, as the work itself is your social contribution.”

Rogaway suggests technologists refocus on a moral duty to build new encrypted systems that empower ordinary people:

“All that said, I do believe it accurate to say that conventional encryption does embed a tendency to empower ordinary people. Encryption directly supports freedom of speech. It doesn’t require expensive or difficult-to-obtain resources. It’s enabled by a thing that’s easily shared. An individual can refrain from using backdoored systems. Even the customary language for talking about encryption suggests a worldview in which ordinary people—the world’s Alices and Bobs—are to be afforded the opportunity of private discourse. And coming at it from the other direction, one has to work to embed encryption within an architecture that props up power, and one may encounter major obstacles to success.”

“Responsible” hackers begin organizing in the 1990s

Many free software projects attracted third-party coders who contributed improvements back out of altruism, integrating changes made in their own versions into the original. In this way, free software projects accumulated the work of hundreds or thousands of otherwise uncoordinated individuals, without any central organizing agent. This form of organization has become known as “open allocation.”

Open allocation refers to a style of management allowing a high degree of freedom to knowledge workers, who are empowered to start or join any area of the project, and decide how to allocate their time more generally. It is considered to be a form of “self organization” and is widely practiced outside of any corporate or partnership structure in the world of free software.

In open allocation, decision-making capabilities lie with the people closest to the problem being solved. Projects have a ‘primary responsible person,’ usually whoever has been working in that area the longest or with the most influence. There are no arbiters of a project’s direction beyond the person or persons working on it.[128] Project leaders can rotate into being followers, or drift out entirely, only to be replaced by new collaborators. As opposed to traditional management structures, where power is fixed, in open allocation, positions of leadership are temporary distinctions.

How open allocation works, briefly

As we discussed in Section I, the “analysts” who make up the managerial corporate class typically have a vested interest in change. Marketing narratives may supersede engineering priorities. Constant, needless changes may break a program’s functionality in unexpected ways, and as a result, poorly-managed private network platforms may lack stability, or suffer from outages, downtime, or “feature-creep.”[129]

In open allocation free software projects, contributors build the changes they propose. There are no non-technical managers to dream up spurious features, and even if such features are proposed, it is unlikely anyone else will pick them up and build them.

Proposed features or changes are generally expected to be implemented by the proposer, who is permitted to commit code only if the rest of the project’s maintainers agree that the problem being solved is real and the solution appropriate.
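This gating can be sketched with an ordinary git workflow. Everything below is illustrative (the repository, branch name, and file are invented): the proposer implements their own proposal on a topic branch, and the change reaches the main line only through a maintainer's merge.

```shell
# Illustrative open-source contribution flow: the proposer implements
# their own proposal on a branch; a maintainer merges it after review.
set -e
git init -b main demo && cd demo
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Example Dev"
git commit --allow-empty -m "initial commit"

# The proposer builds the change they proposed, on a topic branch.
git checkout -b widen-timestamp
echo "widened timestamp field" > CHANGES
git add CHANGES
git commit -m "Widen timestamp field"

# A maintainer merges only once the project agrees the problem is real
# and the solution appropriate; until then the branch sits unmerged.
git checkout main
git merge --no-ff widen-timestamp -m "Merge reviewed change"
```

The `--no-ff` merge preserves the branch as a distinct unit of review in the history, which is one common convention among maintainers, not a universal rule.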

This alternative model for organizing work relations is considered the primary accomplishment of the free and open source software movement.[130]

Benefits of working open allocation

This system has many benefits, one of which is that it minimizes “technical debt.” Technical debt is a metaphor for the additional work created later by quick-and-dirty solutions used today. In practice, technical debt can accrue easily from frivolous feature requests, redirections, changes, poor communication, and other issues. Technical debt can also be introduced by regulation and legislation imposed on software companies.

In this way, corporate management and governmental oversight are indistinguishable, both sources of forcible, monotechnic, ceremonial, spurious technological development—and debt.

If technical debt accumulates, it can be difficult to implement meaningful improvements to a program later on. Systems with high technical debt become Sisyphean efforts: it takes more and more effort to maintain the status quo, leaving less and less time to plan for the future. Systems like this require slavish dedication, and they are antithetical to the kind of work conducive to happiness. Technical debt has high human costs, as recounted in one developer’s anecdotal description (edited for length):[131]

  • Unpleasant Work: A code base high in technical debt means that feature delivery slows to a crawl, which creates a lot of frustration and awkward moments in conversation about business capability. When new developers are hired or consultants brought in, they know that they’re going to have to face confused looks, followed by those newbies trying to hide mild contempt. To tie this back to the tech debt metaphor, think of someone with mountains of debt trying to explain being harassed by creditors. It’s embarrassing, which is, in turn, demoralizing.
  • Team Infighting: Not surprisingly, this kind of situation tends to lead to bickering among the team. Again, the metaphor holds as one would expect this kind of behavior from a married couple with crippling debt. Teams draw battle lines. They add acrimony on top of the frustration and embarrassment of the problem itself.
  • Atrophied Skills: As embarrassment mounts and the blame game is played more vigorously, team members can feel their professional relevance slipping away. Generally speaking, they want to touch things as little as humanly possible, because doing so further impairs their already lethargic process. It’s too slow and it’s too risky.

Technical debt usually results from beginning a software project without having a clear conception of the problem being solved. As you add features, you misapprehend the actual goal of your intended users. As a result, you end up in an “anti-pattern.” Anti-patterns are patterns of design and action which, despite looking like the right path at the moment, turn out to induce technical debt. Anti-patterns are project- and company-killers because they heap on technical debt.[132]

By contrast, in an open allocation project with global significance, the benefits of open allocation governance are maximized. Those benefits include:[133]

  • Coordination: the people conceiving of the work are the ones doing the work.
  • Motivation: You’re choosing your own project, so you have more at stake.
  • Responsibility: Because you choose your assignment and you solve your own problems, you have nobody to blame but yourself if something doesn’t work.
  • Efficiency: Trusted with their own time, new collaborators set immediately to work. No bureaucratic hassles slow down programming.

As it turns out, people love open allocation. In 2005, MIT Sloan and Boston Consulting Group did a study about the motivations of open source software engineers. The study reports:[134]

“We found that… enjoyment-based intrinsic motivation, namely how creative a person feels when working on the project, is the strongest and most pervasive driver [for voluntarily working on software]… Many are puzzled by what appears to be irrational and altruistic behavior by movement participants: giving code away, revealing proprietary information, and helping strangers solve their technical problems… FOSS participants may be seeking flow states by selecting projects that match their skill levels with task difficulty, a choice that may not be available in their regular jobs.”

These findings have prompted an acknowledgement within management science of the sins of the 20th century, and a search for ways to reorganize so that decision-making is pushed down to the operators.


Commercial software makers become begrudging copycats

The “open source” movement officially emerged in 1998, as a marketing program for free software adoption within businesses. It framed free software adoption in a way that businesses could understand.[135]

Stallman, the GNU creator, says the difference between free and open source software is a moral one: “Most discussion of ‘open source’ pays no attention to right and wrong, only to popularity and success.”[136]

Whatever the distinction, corporate technology giants panicked at the sudden invasion of software that anyone could license, copy, fork, deploy, modify, or commercialize. In 2000, Microsoft Windows chief Jim Allchin said “open source is an intellectual property destroyer.”[137] In 2001, Steve Ballmer said “Linux is a cancer that attaches itself, in an intellectual property sense, to everything it touches.” [138]

The fact remained: the methodologies of open source and open allocation-style governance were enjoyable, and produced very successful software. In 2001, a movement grew to bring open allocation methodologies into corporations. It was called “Agile Development,” and it was a desperate measure by the commercial software companies to hang onto relevance. If they couldn’t beat open source, they could join it and build commercial services and products on top. Copying the Cypherpunks and Cyberspace enthusiasts before them, the Agile proponents wrote a founding document. The Agile Manifesto read in part:[139]

“In order to succeed in the new economy, to move aggressively into the era of e-business, e-commerce, and the web, companies have to rid themselves of their Dilbert manifestations of make-work and arcane policies. This freedom from the inanities of corporate life attracts proponents of Agile Methodologies, and scares the begeebers (you can’t use the word ‘shit’ in a professional paper) out of traditionalists. Quite frankly, the Agile approaches scare corporate bureaucrats—at least those that are happy pushing process for process’ sake versus trying to do the best for the “customer” and deliver something timely and tangible and “as promised”—because they run out of places to hide.”

Free, open source Unix variants succeed wildly

Microsoft finally integrated Linux and the open source technologies into its enterprise Azure platform in 2012. Linux, for its part, bested Windows and other proprietary operating systems to become the foundation of the Web. Unix-like operating systems power 67 percent of all servers on Earth. Within that 67 percent, at least half of those run Linux. No matter what kind of computer or phone you’re using, when you surf the Web, you’re probably connecting to a Linux server. [140]

Other free open source libraries have also been successful within a corporate setting. Bloomberg LP uses and contributes code back to the open source Apache Lucene and Apache Solr projects, which are critical for search functions in its Terminal.[141] BSD, another open source Unix derivative, was the basis for macOS and iOS.[142] Google’s Android is based on Linux.[143]

BMW, Chevrolet, Ford, Honda, Mazda, Mercedes, Nissan, Suzuki, Tesla, and the world’s largest automobile company, Toyota, all use Automotive Grade Linux in their vehicles. Blackberry and Microsoft both have vehicle platforms, but they are used by a minority of car OEMs. Volkswagen and Audi are moving to a Linux-based Android platform as of 2017.[144] [145]

Tesla, for its part, is open-sourcing its Linux distribution for the Model S and X cars, including the Tesla Autopilot platform, the kernel sources for hardware, and the infotainment system.[146]

These examples serve to demonstrate two counter-intuitive lessons about software generally:[147]

  1. The success of software frequently has an inverse relationship with the amount of capital behind it.
  2. Many of the most meaningful advances in computer technology have been the product of enthusiasts working outside the corporate or university system.

Modern organization design emerges in the hackers’ image

Today, many software companies experiment with ways to reduce their reliance on management hierarchy. Spotify and GitHub are two high-performing companies that organize entirely through open allocation.

Spotify, for its part, has produced two in-depth videos about how its independent project teams collaborate. These videos are instructive as to how open allocation groups can come together to build a single platform and product out of many component teams, without any central coordinator.


Figure 4. Spotify’s “engineering culture” videos summarize how open allocation can work in a commercial software company. In practice, traditional companies have trouble adopting this organizational design without outside help.
(Credit: YouTube)

  • How Spotify Works, Part 1[148]
  • How Spotify Works, Part 2[149]

Open allocation works inside companies similarly to the way it works outside a company structure, with a few exceptions. While companywide rank doesn’t determine project allocations, it is often a factor in compensation.

“Responsive Organization” is a movement, anchored by Microsoft, to adopt open allocation–style organizational design within the company and within Yammer, the corporate message-board system Microsoft acquired in 2012.[150] Consultancies have emerged specializing in “organization design” and the transition to Responsive team structures.

Ultimately, attempts at creating “ideal engineering conditions” inside a corporation may last only as long as the company is comfortably situated in its category. Google began its life with a version of open allocation governance known as “20 percent time,” but eliminated it as the company grew and adopted stack ranking.[151]

Broader study reveals that power is not truly migrating to the “makers” in most companies. According to a research initiative by MIT Sloan Management Review and Deloitte Digital, digitally maturing companies should be pushing decision-making further down into the organization, but this isn’t happening.[152] Respondents in that study said they wanted to continually develop their skills, but received no support from their employers for new training.

This finding mirrors the aforementioned MIT study on the motivations of open source contributors, which found that programmers enjoyed working on open source projects because it was a path to developing new, durable, and useful skills, at their own volition.[153]


In this section we introduced hacker culture and its approach to creating software around a specific set of design principles and values. We’ve shown how hacker culture developed an organizational pattern, and we have suggested that these patterns have made computer software more accessible to non-professional and non-academic people, undermining the social divisions created by strict licensing and closed-source code. We’ve demonstrated the success of the free and open source approach at the foundational level, with software such as Linux and Apache.

Finally, we have shown the ways commercial software companies have tried to mimic the open allocation ways of working. With free and open source software, the hacker movement effectively destroyed the institutional monopoly on research and development.[154] In the next section, we’ll learn how exactly their organizational patterns work, and how Bitcoin was built to improve them.


Foot Notes

[101] “What Is Free Software and Why Is It so Important for Society?” Free Software Foundation, https://iterative.tools/2pAnnUf.

[102] Ibid.

[103] “The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system. On most systems, it is one of the first programs loaded on start-up (after the bootloader). It handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It handles memory and peripherals like keyboards, monitors, printers, and speakers.” “Kernel,” Wikipedia. https://iterative.tools/2A0yQlQ.

[104] “Facebook–Cambridge Analytica Data Scandal.” Wikipedia. August 28, 2018. https://en.wikipedia.org/wiki/Facebook–Cambridge_Analytica_data_scandal.

[105] “Facebook says 50 million accounts were breached,” USA Today, September 28, 2018. https://iterative.tools/2NvhxNb.

[106] “GNU Manifesto,” GNU.org, https://iterative.tools/2DtyMyU.

[107] Ibid.

[108] Waller, William Jr., “The Evolution of the Veblenian Dichotomy: Veblen, Hamilton, Ayres, and Foster,” Journal of Economic Issues, Vol. XVI, No. 3, Sept. 1982. https://iterative.tools/2LPDjdL.

[109] Raymond, Eric, “The Cathedral & the Bazaar,” Snowball Publishing, 1997. https://iterative.tools/2QTJIbE.

[110] Hill, Benjamin. “To Fork or Not to Fork,” https://iterative.tools/2wAHsMY, 2005.

[111] “All Watched Over by Machines of Loving Grace.” Wikimedia Foundation, 2018, https://iterative.tools/2CxCXs6.

[112] Madrigal, Alexis C. “Weekend Poem: All Watched Over by Machines of Loving Grace.” The Atlantic, Atlantic Media Company, 19 Sept. 2011, https://iterative.tools/2CgGY4W.

[113] “All Watched Over by Machines of Loving Grace, Episode 1: Love and Power.” Vimeo, 29 July 2018, https://iterative.tools/2NBUAgu.

[114] “Cyberspace and Power,” BBC, produced by Adam Curtis, 2017. https://iterative.tools/2xD4J2q.

[115] “Hypernormalisation,” BBC, produced by Adam Curtis, 2016. https://iterative.tools/2zsX6gb.

[116] Barbrook, Richard, “The Californian Ideology,” 1995, https://iterative.tools/2N8BzBh.

[117] “John Perry Barlow,” Wikipedia, https://en.wikipedia.org/wiki/John_Perry_Barlow.

[118] “Hypernormalisation,” BBC, produced by Adam Curtis, 2016. https://iterative.tools/2zsX6gb.

[119] May, Timothy. “The Crypto Anarchist Manifesto,” Satoshi Nakamoto Institute, 1992, https://iterative.tools/2OWlR9G.

[120] Bernstein v. United States, Wikipedia, https://iterative.tools/2Ihlapg.

[121] Narayanan, Arvind. “What Happened to the Crypto Dream?” IEEE Security & Privacy, vol. 11, no. 3, 2013, pp. 68–71. https://iterative.tools/2QUmRNb.

[122] Ibid.

[123] Cypherpunk, Wikimedia, https://iterative.tools/2wGBABZ.

[124] Manne, Robert. “The Cypherpunk Revolutionary.” The Monthly. December 09, 2015. https://iterative.tools/2O9XJDs.

[125] Ibid.

[126] Manne, Robert. “The Cypherpunk Revolutionary.” The Monthly. December 09, 2015. https://iterative.tools/2O9XJDs.

[127] Rogaway, Phillip, “The Moral Character of Cryptographic Work,” 2015, https://iterative.tools/2QTdflX.

[128] Dannen, Chris, “Inside Github’s Super-Lean Management Strategy and How It Drives Innovation,” Fast Company, https://iterative.tools/2NcHGo8, 2013.

[129] “Scope creep,” Wikipedia, https://en.wikipedia.org/wiki/Scope_creep.

[130] Soderberg, page 2.

[131] “The Human Cost of Tech Debt,” DaedTech, https://iterative.tools/2IdylHI.

[132] “Anti-Pattern,” Wikipedia. Wikimedia Foundation, August 29, 2018. https://en.wikipedia.org/wiki/Anti-pattern.

[133] Church, Michael, “What Is Open Allocation?”, https://iterative.tools/2xBeVIq.

[134] Karim R. Lakhani and Robert G Wolf, MIT Sloan School of Management and The Boston Consulting Group, “Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects,” edited by J. Feller, B. Fitzgerald, S. Hissam, and K. R. Lakhani, MIT Press, 2005. https://iterative.tools/2NKeOQT.

[135] “Should We Celebrate the Anniversary of Open Source?” Open Source Initiative, https://iterative.tools/2xM1HIr.

[136] “Open Source Misses the Point,” GNU.org, https://iterative.tools/2zs6MaN

[137] Candeub, Adam. “Will Microsoft’s Embrace Smother GitHub?” The Wall Street Journal, Dow Jones & Company, June 24, 2018. https://iterative.tools/2Q1Omnq.

[138] “Ballmer: Linux is a cancer,” The Register, June 2, 2001, https://iterative.tools/2IfzzSy.

[139] “History: The Agile Manifesto.” Agilemanifesto.org, https://iterative.tools/2Q4Y9sd.

[140] Finley, Klint. “Linux Took Over the Web. Now, It’s Taking Over the World.” Wired, June 3, 2017. https://iterative.tools/2xALuGM.

[141] “How Bloomberg Integrated Learning-to-Rank into Apache Solr.” Tech At Bloomberg, January 23, 2017. https://iterative.tools/2MOeQuY.

[142] “What Are the Differences between Mac OS and Linux?” Ask Ubuntu, https://iterative.tools/2NGclqB.

[143] Hayden James. “81% of All Smartphones Are Powered by Linux,” HaydenJames.com, July 19, 2018. https://iterative.tools/2O9kHuB.

[144] Vaughan-Nichols, Steven J. “​Linux Is under Your Hood.” ZDNet, April 11, 2018. https://iterative.tools/2ztfndl.

[145] “Android Auto Coming to Audi Dashboards, No Phone Required.” Android Authority. Android Authority, May 25, 2017. https://iterative.tools/2N9TIi0.

[146] Vaughan-Nichols, Steven J. “​Tesla Starts to Release Its Cars’ Open-Source Linux Software Code.” ZDNet. ZDNet, May 30, 2018. https://iterative.tools/2wMZkVl.

[147] Soderberg, page 15.

[148] How Spotify Works, Part 1: https://iterative.tools/spotify1.

[149] How Spotify Works, Part 2: https://iterative.tools/spotify2.

[150] “Six FAQs about developing a responsive organization,” Microsoft.com, https://iterative.tools/2wFtgCE

[151] Mims, Christopher. “Google Engineers Insist 20% Time Is Not Dead-It’s Just Turned into 120% Time,” Quartz, August 20, 2013. https://iterative.tools/2Q4YNWF.

[152] Low, Jonathan. “How Digital Maturation Is Transforming Organizational Leadership.” The Low-Down, https://iterative.tools/2Dusb7f.

[153] Lakhani and Wolf, “Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects.” https://iterative.tools/2NKeOQT.

[154] Soderberg, page 4.


Section IV

Human Consensus In Cryptocurrency Networks

How Bitcoin coordinates work amongst disparate groups of human volunteers

“[To attract a large, engaged volunteer base], you must start off with something that people are passionate about. It’s surprising that it’s a piece of software… I never thought I’d see this kind of enthusiasm about a piece of software, but it’s there. It turns out that when you look at the web, it’s a billion people online all looking through this window that you made, and they’re going to have opinions about it.”

—Asa Dotzler, Mozilla’s Director of Community Development, re: Firefox, 2009[155]

In this section we explore how the World Wide Web brought hackers together on message boards and email lists, where they began to organize. We look at their ambition to build private networks, and how they staked out requirements for such a network using the lessons learned in earlier decades.

Hackers begin developing “free” software

So far we have argued that free open source software is the right medium for digital infrastructure, because its processes discourage spurious, ceremonial, expensive, and monotechnic developments. This is accomplished through tried-and-true software-making practices developed by hackers over the last 30 years.

In this section, we will discuss how Satoshi Nakamoto innovated on top of existing open allocation governance processes in order to make them robust enough to govern a currency system.

The fundamental challenge of any social system is that people are inclined to break the rules when it’s profitable and expedient. Unlike present-day financial systems, which are hemmed in by laws and conventions, the Bitcoin system formalizes human rules into a software network. But how does the system prevent human engineers from changing this system over time to benefit themselves?

Nakamoto’s solution to this question can be broken down into three parts:

  1. Make all participants “administrators” of the system, with no central controller.
  2. Require broad agreement among participants for any necessary rule changes.
  3. Make colluding to change the rules extremely expensive to attempt.

These solutions are nice in theory, but it’s important to remember that Nakamoto sought to enforce these rules upon human participants by using a software system. Prior to the release of Bitcoin, doing so would have run up against two specific unsolved engineering challenges:

  1. How can a system with many different computers maintain a database of transactions, without the use of a central coordinating computer? (In such a system, anyone with access to the central coordinating computer could change the rules in the system for their own benefit.)
  2. How do all the different administrators agree that the database was not, in fact, altered? (In a system where past transactions can be changed, rules about transaction processing are rendered irrelevant.)

Pioneering work that led to Bitcoin

A financial system with the aforementioned attributes is not a new concept. Ever since Tim May proposed “crypto anarchy” in 1992, the cypherpunks had been trying to realize digital currency systems as a way of creating a private, pseudonymous micro-economy that would be resistant to cheating or counterfeiting, even without anyone policing the participants.

Bitcoin was not the first attempt at digital money. Indeed, the idea was pioneered by David Chaum in 1983. In Chaum’s model, a central server prevented double-spending, but this was problematic:

“The requirement for a central server became the Achilles’ heel of digital cash. While it is possible to distribute this single point of failure by replacing the central server’s signature with a threshold signature of several signers, it is important for auditability that the signers be distinct and identifiable. This still leaves the system vulnerable to failure, since each signer can fail, or be made to fail, one by one.”[156]

DigiCash was another example of a currency that failed due to regulatory requirements placed on its central authority; it was clear that the need to police the owners of the system significantly undermined the efficiencies gained by digitizing a currency system.[157]

Cypherpunk Wei Dai was directly influenced by crypto-anarchy when he came up with his decentralized “B-money” proposal in 1998. “I am fascinated by Tim May’s crypto-anarchy,” he writes in the introduction to his essay:[158]

“Unlike the communities traditionally associated with the word ‘anarchy,’ in a crypto-anarchy the government is not temporarily destroyed but permanently forbidden and permanently unnecessary. It’s a community where the threat of violence is impotent because violence is impossible, and violence is impossible because its participants cannot be linked to their true names or physical locations.”

Dai’s concept was based on recent developments in computer science which suggested that such a system might be feasible.

Prior art

As of the early 2000s, recent innovations had made Wei Dai’s B-money concept possible.[159] Scott Stornetta and Stuart Haber had proposed something called “linked timestamping” in 1990 to build a trusted chain of digital signatures which could be used to notarize and timestamp a document, preventing retroactive tampering.[160] In 1997, Adam Back invented Hashcash, a denial of service protection for P2P networks, which would make it expensive and difficult for participants to collude to alter past transactions.[161]
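The linked-timestamping idea can be sketched with a simple hash chain, in which each entry commits to its predecessor. This is a toy illustration of the tamper-evidence property only, not Haber and Stornetta's full protocol, which also involved a signing timestamp service:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(documents):
    """Each link commits to the previous link's hash, so altering any
    earlier document changes every subsequent hash in the chain."""
    chain = []
    prev = b"\x00" * 32  # placeholder for the first ("genesis") link
    for doc in documents:
        link = h(prev + doc)
        chain.append(link)
        prev = link
    return chain

docs = [b"deed #1", b"deed #2", b"deed #3"]
original = build_chain(docs)
tampered = build_chain([b"deed #1 (altered)", b"deed #2", b"deed #3"])

# Retroactively tampering with the first document changes every later hash:
assert original[0] != tampered[0]
assert original[1] != tampered[1]
assert original[2] != tampered[2]
```

Because each hash depends on everything before it, a verifier only needs the most recent link to detect tampering anywhere in the history.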

Still, participants might collude to break the rules in other ways, such as to counterfeit coins. Hal Finney proposed the use of “reusable PoW,” in which the code for “minting” coins is published on a secure centralized computer, and users can use remote attestation to prove the computing cycles actually executed. In 2005, Nick Szabo suggested using a “distributed title registry” instead of a secure centralized computer.[162]

In early 2009, Satoshi Nakamoto released the first implementation of a peer-to-peer electronic cash system, wherein the central server’s signature of authority was replaced by a decentralized “Proof-of-Work” system.[163] Nakamoto wrote after launch that “Bitcoin is an implementation of Wei Dai’s b-money proposal on Cypherpunks in 1998, and Nick Szabo’s Bitgold proposal.”[164]
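The core mechanic of Proof-of-Work can be sketched as a brute-force search for a nonce that drives a hash below a target: finding the nonce is costly, while verifying it takes a single hash. This is a simplified sketch; Bitcoin itself hashes an 80-byte block header with SHA-256 applied twice, and encodes its target in a compact "bits" field.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(header + nonce) falls below a
    target with `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty for demonstration; ~4,096 hashes on average at 12 bits.
nonce = mine(b"example block header", 12)
digest = hashlib.sha256(b"example block header" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < 2 ** (256 - 12)
```

Raising the difficulty by one bit doubles the expected work, which is how the network makes rewriting history progressively more expensive.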

How Bitcoin works, briefly

Well-written tutorials about “how Bitcoin works” are plentiful.[165] Instead of reproducing those explanations, the following paragraphs explain only what is required to understand the design rationale of the system, as a way of elucidating its purpose. Specifically, we will explore the incentive system, which keeps Bitcoin’s contributors working together in lieu of any formal association.

Central to the Bitcoin system is the concept of “mining,” which will be explained in greater depth in the next section. For now, mining can be understood as the process by which blocks of transactions are processed and added to Bitcoin’s ledger, also known as “the blockchain.” “Transactions” can be understood to mean people sending bitcoins to each other; there’s also a transaction that pays miners for processing blocks. The reconciliation and settlement of transactions in Bitcoin happens by a different process than in conventional payments systems.

How users agree on which network is “Bitcoin”

Many users only experience Bitcoin transactions through a lightweight “wallet” application on a mobile phone. Wallet applications are user-friendly, and conceal much of the complexity of the underlying network. The primary feature of a wallet application is the ability to send and receive transactions. Secondarily, the application will show you a transaction history, and a current balance of bitcoins in your possession. This information is taken directly from the network itself, which has the ability to remember preceding transactions; in other words, it is a stateful computing system.

Bitcoin is not exactly stateful the way your smartphone or computer is. It calculates and recalculates every balance roughly every 10 minutes, all in one go, like a mechanized spreadsheet. Bitcoin can be thought of as a single virtual machine comprised of many individual pieces of hardware distributed across the globe, working together toward that recurring 10-minute rebalancing of the ledger.
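That rebalancing can be pictured with a toy, account-style ledger: every balance is recomputed from the full transaction log rather than stored per user. The names and amounts below are hypothetical, and Bitcoin actually tracks unspent transaction outputs rather than named accounts, but the recomputation idea is the same:

```python
# A toy "mechanized spreadsheet": balances are derived from the log, not stored.
transactions = [
    {"from": None, "to": "alice", "amount": 50.0},  # newly emitted coins
    {"from": "alice", "to": "bob", "amount": 20.0},
    {"from": "bob", "to": "carol", "amount": 5.0},
]

def balances(log):
    """Replay every transaction in order to recompute all balances."""
    totals = {}
    for tx in log:
        if tx["from"] is not None:
            totals[tx["from"]] = totals.get(tx["from"], 0.0) - tx["amount"]
        totals[tx["to"]] = totals.get(tx["to"], 0.0) + tx["amount"]
    return totals

assert balances(transactions) == {"alice": 30.0, "bob": 15.0, "carol": 5.0}
```

Because any node can replay the log from the beginning, no single machine's stored balances need to be trusted.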

These machines can be sure they are connecting to the same network because they are using a network protocol, or a set of machine instructions built into the Bitcoin software. It is often said that Bitcoin is “not connected to the World Wide Web,” because it does not communicate using the HTTP protocol like Web browsers do.

While it’s true that Bitcoin is not a “Web application” like Facebook or Twitter, it does use the same underlying Internet infrastructure as the Web. The “Internet protocol suite” emerged as a DARPA-funded project at Stanford University between 1973 and 1974. It was made a military standard by the US Department of Defense in 1982, and corporations like AT&T and IBM began using it in 1984.[166] [167]

Figure 5: The layers of the Internet protocol suite.
(Credit: Wikimedia)

Within this application layer exists not just the World Wide Web, but also the SMTP email protocol, FTP for file transfer, SSH for secure direct connections to other machines, and various others—including Bitcoin and other cryptocurrency networks. We’ve said that free software like Bitcoin can be copied and re-deployed by anyone, so how can disparate versions not interfere?

In practice, they do, to some extent. The Bitcoin software will automatically try to connect to the Bitcoin blockchain, but by changing configuration files and modifying the Bitcoin software, you can connect to other Bitcoin-like networks that people have created, known as Bitcoin forks. Some of these forks may have Bitcoin-like names, and claim to improve upon Bitcoin, but few of them will be valued by the market; altcoins will be discussed at greater length in Section VII.

Figure 6. Where Bitcoin sits in the Internet Protocol suite.

With a traditional debit or credit card, any financial activity you conduct over the Internet is recorded within your “account,” stored on the card issuer’s central computer or cloud. There are no accounts in Bitcoin. Instead, funds (i.e., bitcoins) are controlled by a pair of cryptographic keys. Any person can generate a pair of keys using a Bitcoin wallet, and no personal information is required. Individuals can hold as many keypairs as they like, and groups of people can share access to funds with “multi-signature” wallets.

As we will see, wallet users are just one group of stakeholders in the Bitcoin network. Software for technical users also exists in several forms; it can be downloaded directly from the Bitcoin code repository, or installed from the terminal (in macOS or Linux).[168]

Users who run and store the full transaction history of the network on their computer will see it occupy about 200GB. Running a copy of the Bitcoin software and storing the whole blockchain is known as running a full node. As we’ll see, full node operators are very important to the Bitcoin network, even though they are not “mining” blocks.

Once the Bitcoin software is installed on your Internet-connected phone or computer, you can send bitcoins to, and receive bitcoins from, anyone else in the world, in any arbitrary quantity. Sending bitcoins incurs a small fee, which is paid to miners.

Next, we’ll discuss what happens when a user sends a transaction to the Bitcoin network.

How the system knows who is who

Sending transactions on the Bitcoin network modifies the state of the ledger, the blockchain. In order to hold Bitcoin and make transactions, the user must first generate a pair of cryptographic keys, also known as a keypair. Keys are used to digitally sign data without encrypting it.

A transaction is recorded in the blockchain’s state transition if it meets several criteria: a valid digital signature must be present for the bitcoins being spent, and the keypair must control a sufficient balance of bitcoins to fund the transaction. Below is the full anatomy of a Bitcoin transaction:

Figure 7. Anatomy of a Bitcoin transaction. The transaction ID appears in yellow. Metadata appears in the blue bracket. Transaction inputs, in orange, are unspent outputs already controlled by the sender, and are used to fund the transaction. The outputs, in green, are bitcoins being transferred to another account. If the inputs exceed the desired transaction amount, then “change” is returned to the sender as a new unspent output. Unspent transaction outputs are also called “UTXOs.”
(Credit: Venzen at Mail.bihthai.net)
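The input/output mechanics in Figure 7 can be sketched with a toy coin-selection routine. The values and identifiers here are hypothetical, and real wallets also deduct a miner fee and use more sophisticated selection strategies:

```python
# Unspent outputs controlled by the sender: (source transaction, amount).
utxos = [("txid-a", 0.5), ("txid-b", 0.3)]

def build_transaction(utxos, amount):
    """Select enough UTXOs to fund `amount`, returning any excess as change."""
    selected, total = [], 0.0
    for utxo in utxos:
        selected.append(utxo)
        total += utxo[1]
        if total >= amount:
            break
    if total < amount:
        raise ValueError("insufficient funds")
    outputs = [("payee", amount)]
    change = round(total - amount, 8)  # bitcoins are divisible to 8 places
    if change > 0:
        outputs.append(("sender-change", change))
    return {"inputs": selected, "outputs": outputs}

tx = build_transaction(utxos, 0.6)
# Both UTXOs are consumed; 0.2 BTC comes back to the sender as change:
assert tx["inputs"] == [("txid-a", 0.5), ("txid-b", 0.3)]
assert tx["outputs"] == [("payee", 0.6), ("sender-change", 0.2)]
```

Inputs are always spent whole, which is why a change output is usually needed, just as paying with banknotes returns coins.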

General ledgers have been in use in accounting for 1,000 years, and many good primers exist on double-entry accounting and ledger-balancing.[169] [170] Bitcoin can be thought of as “triple-entry” accounting: both counterparties in a given transaction have a record of it in their ledger, and the network also has a copy of everyone’s transactions. This comprehensive history of every Bitcoin transaction ever is stored redundantly on every single full node. This is the 200GB of data you download when you store the blockchain.

Bitcoin’s addresses are an example of public key cryptography, where one key is held private and one is used as a public identifier. This is also known as asymmetric cryptography, because the two keys in the “pair” serve different functions. In Bitcoin, keypairs are derived using the ECDSA algorithm.

Figure 8: Visual representation of a user Alice signing a message and giving it to Bob, who can verify its sender using Alice’s public key, which she has provided earlier. Many PGP users attach their public key to all email correspondence, or list it on their personal homepage.
(Credit: Wikimedia)

  • Public keys are shared publicly, like an email address. When sending bitcoin to a counterparty, their public key can be considered the “destination.”
  • Private keys are kept secret. Gaining access to the funds held by a public key requires the corresponding private key. Unlike an email password, however, if the private key is lost, access to the funds is lost. In Bitcoin, once the private key is generated, it is not stored in any central location by default. Thus, it is up to the user alone to record and safeguard it.

The use of public key cryptography is one of the relatively recent military innovations that make Bitcoin possible; it was developed secretly in 1970 by British intelligence, before being re-invented publicly in 1976.[171]

In Bitcoin, these digital signatures identify digitally-signed transaction data as coming from the expected public key. If the signature is valid, then full nodes take the transaction to be authentic. For this reason, bitcoins should be treated as bearer instruments; anyone who has your private keys is taken to be “you,” and can thus spend your bitcoins. Private keys should be carefully guarded.
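The sign/verify asymmetry can be demonstrated with textbook RSA, since Python's standard library includes no ECDSA implementation. The parameters below are tiny and insecure, chosen purely for illustration; Bitcoin itself uses ECDSA over the secp256k1 curve:

```python
import hashlib

# Toy RSA keypair (insecure demonstration values).
p, q = 61, 53
n = p * q                           # public modulus: 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

def sign(message: bytes) -> int:
    """Only the holder of the private exponent d can produce this value."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"send 1 BTC to Bob")
assert verify(b"send 1 BTC to Bob", sig)
# Any tampering with the signature breaks verification:
assert not verify(b"send 1 BTC to Bob", (sig + 1) % n)
```

The private key never leaves the signer's possession; only the signature and the message travel across the network, which is why possession of the private key is equivalent to ownership of the funds.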


Where transactions are processed

The Bitcoin network requires every transaction to be signed by the sender’s private key: this is how the network knows the transaction is real, and should be included in a block. Most users will store their private key in a special software application called a “cryptocurrency wallet.” This wallet ideally allows users to safely access their private key, in order to send and receive transactions through the Bitcoin network. Without a wallet application, one must send and receive transactions in the command-line Bitcoin software, which is inconvenient for non-technical users.

When a wallet application (or full node) submits a transaction to the network, it is picked up by nearby full nodes running the Bitcoin software, and propagated to the rest of the nodes on the network. Each full node validates the digital signature itself before passing the transaction on to other nodes.

Because transactions are processed redundantly on all nodes, each individual node is in a good position to identify fake transactions, and will not propagate them. Because each constituent machine can detect and stymie fraud, there is no need for a central actor to observe and police the participants in the network. Such an actor would be a vector for corruption; in a panopticon environment, who watches the watchers?
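This relay rule can be sketched as a simple validate-then-forward policy. The transaction and peer structures below are hypothetical stand-ins, not Bitcoin Core's actual data types:

```python
def valid(tx, utxo_set, verify_sig):
    """A node accepts a transaction only if it spends outputs that actually
    exist and carries a valid signature."""
    funded = all(inp in utxo_set for inp in tx["inputs"])
    return funded and verify_sig(tx)

def relay(tx, utxo_set, verify_sig, peers):
    """Each node validates independently; invalid transactions are dropped
    locally and never reach the rest of the network."""
    if not valid(tx, utxo_set, verify_sig):
        return []
    return peers

peers = ["node-1", "node-2"]
utxo_set = {"utxo-a"}
good = {"inputs": ["utxo-a"]}
fake = {"inputs": ["utxo-z"]}  # spends an output that does not exist

assert relay(good, utxo_set, lambda tx: True, peers) == peers
assert relay(fake, utxo_set, lambda tx: True, peers) == []
```

Because every node applies the same check, a fraudulent transaction dies at its first hop rather than requiring a central policeman to catch it.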

Thus it follows that Bitcoin transactions have the following desirable qualities:

  • Permissionless and pseudonymous. Anyone can download the Bitcoin software, create a keypair, and receive Bitcoins. Your public key is your identity in the Bitcoin system.
  • Minimal trust required. By running your own full node, you can be sure the transaction history you’re looking at is correct. When operating a full node, it is not necessary to “trust” a wallet application developer’s copy of the blockchain.
  • Highly available. The Bitcoin network is always open and has run continuously since launch with 99.99260 percent uptime.[172]

Bitcoin’s “minimal trust” is especially visible in its automated monetary policy: the number of bitcoins ever to be produced by the system is fixed, and they are emitted at regular intervals. In fact, this emission policy has prompted a conversation about the automation of central bank functions at the highest levels of international finance. IMF Managing Director Christine Lagarde has suggested that central bankers will rely upon automated monetary policy adjustments in the future, with human policy-makers sitting idly by.[173] Nakamoto wrote that such automation was the only way to restrain mendacious or incompetent market participants from convincing the bank to print money:[174]

“The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve.”

Nakamoto’s system automates the central banker, and abstracts away the duties of the system’s overall maintainers. If those maintainers someday decide that more bitcoins must be created, they must change the software running on a vast plurality of the machines that operate the Bitcoin network, which are owned by many different people dispersed globally. That is a difficult political proposition, if only because bitcoins are divisible to eight decimal places, leaving little practical need for additional units.
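The fixed emission policy can be checked from two published parameters: an initial block subsidy of 50 BTC, halved every 210,000 blocks, with subsidies accounted in whole satoshis. Summing the halving series shows why the supply converges just below 21 million coins:

```python
# Total bitcoin supply implied by the halving schedule.
HALVING_INTERVAL = 210_000           # blocks between subsidy halvings
SATOSHIS_PER_BTC = 100_000_000       # smallest unit: 1e-8 BTC

subsidy = 50 * SATOSHIS_PER_BTC      # initial reward, in satoshis
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2                    # integer halving; eventually reaches zero

total_btc = total / SATOSHIS_PER_BTC
# The sum converges just under the famous 21 million cap:
assert 20_999_999 < total_btc < 21_000_000
```

Because the subsidy is an integer number of satoshis, the halving eventually truncates to zero, so the cap is enforced by arithmetic rather than by policy.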

Management within open allocation projects

In the last section, we encountered “open allocation” governance, wherein a loose group of volunteers collaborates on a project without any official leadership or formal association. We saw how it was used effectively to build “free” and open source software programs which, in the most critical cases, proved to be superior products to the ones made by commercial software companies.

So far, our presentation of open allocation governance and hacker culture has depicted an Edenic ideal in which everyone works on what they like, without the hassle of a boss. Surely these developers bump up against one another and disagree. Surely there must be accountability. How does a “leaderless” group actually resolve conflict?

The truth is that open allocation projects do require management, but it’s far less visible, and it happens behind the scenes, through a fairly diffuse and cooperative effort. The goal of this form of group management is to make the project a fun and interesting environment that developers want to return to.[175]

Operational health and survivability

First, it’s important to note that not all conflict is bad—some is generative, and results in better code. Sometimes many epic email threads must be exchanged before parties come into alignment.

But in order to distinguish undesirable conflict from spirited brainstorming, we must first define “success” in an open allocation project context. Mere technical success—building a thing which achieves adoption—is certainly important at the outset of a project. But within a short time, the needs of users will evolve, as will the programmer’s understanding of the user and their goals. An inability to refactor or improve code over time will mean degraded performance and dissatisfaction, and the user base will eventually leave. Continuous maintenance and reassessment are the only way for initial success to continue into growth. Therefore, a regular and robust group of developers needs to be available and committed to the project, even if the founding members of the project leave.

The indicators for long-term and meaningful success can be evaluated in a single trait:

Operational health. The operational health of an open allocation project can be said to be the ease with which it integrates new code contributions or new developers.[176] Good operational health is considered a sign of project survivability. Survivability can be defined as the project’s ability to exist and be maintained independent of outside sponsorship or any individual contributor.[177]

Forms of governance in open allocation

Groups working under open allocation vary in the ways they plan work and resolve conflict. Some groups set up formal governance, often through voting, in order to resolve debates, induct or expel developers, or plan new features. Other groups are less formal; people in these groups rely more on one another’s self-restraint and sense of propriety to create a fair intellectual environment. Still, a few nasty or mischievous contributors can ruin a project.

In some projects, a benevolent dictator or “BD” emerges who has the authority to make important decisions about the software or the group. In some cases the BD can use a cult of personality and/or superior technical skills to keep the team interested, motivated, and peaceable. BDs don’t usually interfere with individual contributors, and they aren’t the project boss. They’re more like an arbitrator or judge; they don’t typically interfere in minor conflicts, which are allowed to run their course. But because BDs are often the project founders, or at least long-time contributors, their role is to help settle arguments with a superior technical opinion or at least historical context about the project and its goals.

It is not necessary for the BD to have the strongest engineering skills of the group; instead, it’s more critical that the BD have design sense, which will allow them to recognize contributions which show a high level of reasoning and skill in the contributor. In many cases, settling an argument is a matter of determining which party has the strongest understanding of the problem being solved, and the most sound approach to solving it.[178] BDs are especially useful when a project is fairly young and still finding its long-term direction.

Mature projects tend to rely less on BDs. Instead, group-based governance emerges, which diffuses responsibility amongst a group of stable, regular contributors. Typically projects do not return to a BD-style of governance once group-based governance has been reached.[179]

Emergent consensus-based democracy

Most of the time, an open allocation group without a BD will work by consensus, whereby an issue is discussed until the parties reach an agreement that everyone is willing to accept. Once no dissent remains, the topic of discussion becomes how best to implement the agreed-upon solution.

This form of governance is lightweight, blending the technical discussion itself with the decision-making process. Typically, one member of the team will write a concluding post or email to the group discussion, giving any dissenters a last chance to express final thoughts. Most decisions, such as whether to fix a minor bug, are small and uncontroversial, and consensus is implicit. The use of “version-control” software means that committed code can easily be rolled back. This gives social consensus a fairly relaxed and low-stakes feel. If regular contributors are confident they know what needs to be done, they can typically go ahead and do it.

Sometimes, however, consensus is not easily reached, and a vote is required. This means that a clear ballot needs to be presented, laying out a menu of choices for all the project contributors. Like in the consensus process, the discussion of the ballot options is often enmeshed with the technical discussion. So-called honest brokers emerge who occasionally post summary updates for the contributors who are following the discussion from a distance.

The brokers are sometimes participants in the debate—they need not be above the issue—so long as they are accurately representing the views of each constituent group. If they are, then they can muster the credibility to call a vote. Typically, those who already have “commit access,” meaning those people who have been given permission to write (or “commit”) code to the project repository, are empowered to vote.

By the time a vote is called, there will be little debate about the legitimacy of the options on the ballot; however, obstructionists may try to filibuster.[180] Such people are politely tolerated if their concern seems sincere, but difficult people are typically asked to leave the project. Allowing or banning contributors is also a matter of voting; however, this vote is typically conducted privately amongst existing contributors, rather than on a general project mailing list.[181] There are many voting systems, but they are mostly outside the scope of this essay.[182]

Forking the code

A defining feature of free, open source software is its permissive licensing. Anyone is allowed to copy the codebase and take it in a new direction. This is a critical enabler of open allocation, volunteer-based governance. It means a contributor can spend time and energy on a shared codebase, knowing that if the group priorities diverge from his or her own, they can fork the code and continue in their preferred direction.[183]

In practice, forking has high costs for complex codebases. Few developers are well-rounded enough (or have enough free time) to address and fix every nature of bug and feature that a project might contain.

Forkability puts limits on the powers of Benevolent Dictators.[184] Should they take the project in a direction that most contributors disagree with, it would be trivial for the majority to copy the codebase and continue on without the BD at all. This creates a strong motivation for the BD to adhere to the consensus of the group and “lead from behind.”

Open allocation governance in practice

A useful guide to open allocation governance in a real, successful project can be found in the Stanford Business School case study entitled “Mozilla: Scaling Through a Community of Volunteers.”[185] (One of the authors of the study, Professor Robert Sutton, is a regular critic of the abuses of hierarchical management, not only for its deleterious effects on workers, but also for its effects on managers themselves.[186])

According to Sutton and his co-authors, about 1,000 volunteers contributed code to Mozilla outside of a salaried job. Another 20,000 contributed to bug reporting, a key facet of quality control. Work was contributed on a part-time basis, whenever volunteers found time; only 250 contributors were full-time employees of Mozilla. The case study describes how this “chaordic system” works:

“Company management had little leverage over volunteers—they could not be fired, and their efforts could be redirected only if the volunteers wanted to do something different. The overall effort had to have some elements of organization—the basic design direction needed to be established, new modules needed to be consistent with the overall product vision, and decisions had to be made about which code to include in each new release. While community input might be helpful, at the end of the day specific decisions needed to be made. An open source environment could not succeed if it led to anarchy. [Chairman of the Mozilla Foundation John Lily] referred to the environment as a “chaordic system,” combining aspects of both chaos and order. He reflected on issues of leadership, and scaling, in an organization like Mozilla: ‘I think ‘leading a movement’ is a bit of an oxymoron. I think you try to move a movement. You try to get it going in a direction, and you try to make sure it doesn’t go too far off track.’”

The Bitcoin “business model” binds hackers together despite conflict

In many ways, the Bitcoin project is similar to forerunners like Mozilla. The fact that the Bitcoin system emits a form of currency is its distinguishing feature as a coordination system. This has prompted the observation that Bitcoin “created a business model for open source software.” [187] This analogy is useful in a broad sense, but the devil is in the details.

Financing—which in most technology startups would pay salaries—is not needed in a system where people want to work for free. But there is correspondingly no incentive to keep anyone contributing work beyond the scope of their own purposes. Free and open source software is easy to fork and modify, and disagreements often prompt contributors to copy the code and go off to create their own version. Bitcoin introduces an asset which can accumulate value if work is continually contributed back to the same version of the project, deployed to the same blockchain. So while the Bitcoin software itself is not a business for profit—it is freely distributed under the MIT software license—the growing value of the bitcoin asset creates an incentive for people to resolve fights and continue to work on the version that’s currently running.

This is what is meant by a so-called business model: holding or mining the asset gives technologists an incentive to contribute continual work (and computing power) to the network, increasing its utility and value; in return, the network receives “free labor.” As Bitcoin-based financial services approach feature parity with modern banks, and use of the coin expands, its perceived value grows.

Other real-time gross settlement systems, such as the Fedwire system operated by the Federal Reserve and transacting in Federal Reserve Notes, can serve as a basis for comparison (in overhead costs, security, and flexibility) to the Bitcoin system, which uses bitcoins as the store of value, unit of account, and medium of exchange. Without the prospect of the protocol improving relative to its banking equivalents, there is little prospect of the price of bitcoin increasing; in turn, a stagnant price reduces the financial incentive for self-interested individuals to keep contributing code and advancing the system.

However, the system must also protect against bad actors, who might try to sabotage the code or carry the project off the rails for some selfish end. Next, we will discuss the challenges of keeping a peer-to-peer network together, and how Bitcoin’s design solves both problems: retaining contributors and resisting saboteurs.

How developers organize in the Bitcoin network

We have described how open allocation software development works in detail, but we have not yet delved into the roles in the Bitcoin network. Here we describe how technologists join the network.

There are three groups of technical stakeholders, each with different skill sets and different incentives.

Group A: Miners

The primary role of mining is to ensure that all participants have a consistent view of the Bitcoin ledger. Because there is no central database, the log of all transactions relies on the computational power miners contribute to the network for its immutability and security.

Miners operate special computer hardware devoted to a cryptocurrency network, and in turn receive a “reward” in the form of bitcoins. This is how Bitcoin and similar networks emit currency. The process of mining is explained in detail in the following pages, but it suffices to say that the activities of miners require IT skills including system administration and a strong understanding of networking. A background in electrical engineering is helpful if operating a large-scale mine, where the power infrastructure may be sophisticated.

Operating this computer hardware incurs an expense, first in the form of the hardware itself, and then in the form of the electricity it consumes. Thus, miners must be confident that their cryptocurrency rewards will be valuable in the future before they will be willing to risk the capital to mine them. This confidence is typically rooted in the abilities and ideas of the core developers who build the software protocols the miners will follow. As time goes on, however, miners recoup their expenses, make a profit, and may lose interest in a given network.

Group B: Core Developers

Developers join cryptocurrency projects looking for personal satisfaction and skill development in a self-directed setting. Developers who have bought the coin may also be profit-motivated, seeking to contribute work that makes the value of the coin increase. Many developers simply want to contribute to an interesting, useful, and important project alongside great collaborators. To occupy this role, technologists need strong core programming skills. A college CS background is helpful, but plenty of cryptocurrency project contributors are self-taught hackers.

In any case, core developers incur very few monetary costs. Because they are simply donating time, they need only worry about the opportunity cost of their contributions. In short, developers who simply contribute code may be less committed than miners at the outset, but as time goes on, they may become increasingly invested in the group dynamic and the technology itself. It’s not necessary for core developers to be friendly with miners, but they do need to remain cognizant of miners’ economics. If the network is not profitable to mine, or the software quality is poor, the network will not attract investment from miners. Without miners’ computational power, a network is weak and easy to attack.

Group C: Full Node Operators

Running a “full node” means keeping a full copy of the blockchain locally on a computer, and running an instance of the Bitcoin daemon. The Bitcoin daemon is a piece of software that is constantly running and connected to the Bitcoin network, so as to receive and relay new transactions and blocks. It’s possible to use the daemon without downloading the whole chain.

For the full node operator, running the daemon and storing the chain, the benefit of dedicating hard drive space to the Bitcoin blockchain is “minimally trusted” transactions; that is, he or she can send and receive bitcoin without needing to trust anyone else’s copy of the ledger, which might contain errors or purposeful falsifications.

This might not seem practical for non-technical users, but in actuality, the Bitcoin software does the work of rejecting incorrect data. Technical users or developers building Bitcoin-related services can inspect or alter their own copy of the Bitcoin blockchain or software locally to understand how it works.

Other stakeholders benefit from the presence of full nodes in three ways. Full nodes:

  1. Validate digital signatures on transactions sent to the network. Thus, they are gatekeepers against fake transactions getting into the blockchain.
  2. Validate blocks produced by miners, enforcing rules on miners who (if malicious) may be motivated to collude and change the rules.
  3. Relay blocks and transactions to other nodes.

Also worth mentioning are two primary groups of second-degree stakeholders:

  • Third Party Developers: build a cottage industry around the project, or use it as infrastructure in an application or service (e.g., wallet developers, exchange operators, pool operators). These people frequently run full nodes to support services running on thin clients.
  • Wallet Users: end-users who send and receive cryptocurrency transactions. All stakeholders are typically wallet users if they hold the coin. Many wallets are light clients that trust a copy of the ledger stored by the Third Party Developer of the wallet.


We have examined the way in which the Bitcoin network creates an incentive system on top of free and open source software projects, encouraging the makers of derivative works to contribute back to the original. Now that we’ve discussed how human software developers come to consensus about the “rules” in peer-to-peer systems, we will explore how their disparate machines converge on a single “true” record of the transaction ledger, despite no “master copy” existing.


Foot Notes

[155] Hoyt, David; Sutton, Robert; Rao, Hayagreeva. “Mozilla: Scaling through a community of volunteers,” December, 2009. https://iterative.tools/2MYOMs4.

[156] “Enabling Blockchain Innovations with Pegged Sidechains,” by Adam Back, Matt Corallo, Luke Dashjr, Mark Friedenbach, Gregory Maxwell, Andrew Miller, Andrew Poelstra, Jorge Timón, and Pieter Wuille, Oct. 22, 2014. https://iterative.tools/2ztfBkH.

[157] “A Brief History of Digital Currencies,” Strategic International Management Academic Library, https://iterative.tools/2QZgbNZ.

[158] Dai, Wei. “BMoney,” 1998. http://www.weidai.com/bmoney.txt.

[159] “Bitcoin’s Academic Pedigree.” https://iterative.tools/2O8HZk9.

[160] Haber, Stuart and Stornetta, W. Scott. “How to Time-Stamp a Digital Document,” Journal of Cryptology, Vol. 3, No. 2, pp. 99, 1991. https://www.anf.es/pdf/Haber_Stornetta.pdf

[161] Back, Adam. “Hashcash,” 2003. http://hashcash.org/.

[162] Szabo, Nick. Bit gold, 2005. https://iterative.tools/2NFIW48.

[163] Adam Back, Matt Corallo, Luke Dashjr, Mark Friedenbach, Gregory Maxwell, Andrew Miller, Andrew Poelstra, Jorge Timón, and Pieter Wuille. “Enabling Blockchain Innovations with Pegged Sidechains,” Oct. 22, 2014. https://iterative.tools/2ztfBkH.

[164] Bitcointalk.org, https://iterative.tools/2OQyd3t.

[165] Bitcoin Wiki, https://en.bitcoin.it/wiki/Main_Page.

[166] “Encapsulation,” Wikipedia. https://iterative.tools/2xDS7rM.

[167] “Internet protocol suite,” Wikipedia. https://iterative.tools/2OP8WXn.

[168] Stop and Decrypt, Hackernoon. “A complete beginners guide to installing a Bitcoin Full Node on Linux (2018 Edition),” https://iterative.tools/2OPSYMy.

[169] “Double-Entry Bookkeeping,” Wikipedia, https://iterative.tools/2xDk9DQ.

[170] “General Ledgers,” Investopedia. https://iterative.tools/2QRNtye.

[171] “Public Key Cryptography,” Wikipedia. https://iterative.tools/2DsoBdZ.

[172] “Bitcoin Uptime,” Bitcoinuptime.com.

[173] Lagarde, Christine, IMF, and Bank of England. “Central Banking and Fintech-A Brave New World?” IMF.org, March 2017. https://iterative.tools/2IhnxbG.

[174] “Bitcoin open source implementation of P2P currency,” P2P Foundation, February 11, 2009. https://iterative.tools/2NGsIYn.

[175] Fogel, Karl. “Producing Open Source Software,” O’Reilly Media, 2006. Page 4. https://iterative.tools/2zshkXn.

[176] Ibid, page 87.

[177] Ibid.

[178] Ibid, page 89-90.

[179] Ibid, page 91.

[180] Ibid, page 93.

[181] Ibid, page 141-142.

[182] “Electoral System,” Wikipedia. https://iterative.tools/2QWJmkC.

[183] Ibid, page 88.

[184] Ibid.

[185] Hoyt, David; Sutton, Robert; Rao, Hayagreeva. “Mozilla: Scaling through a community of volunteers,” December, 2009.

[186] Xing Qin, Mingpeng Huang, Russell E. Johnson, and Dong Ju. “The Short-Lived benefits of Abusive Supervisory Behavior for Bad Actors: An Investigation of Recovery and Work Engagement,” in The Academy of Management Journal, September 2017. https://iterative.tools/2ORkxF5.

[187] Ravikant, Naval. “The Bitcoin Model for Crowdfunding,” https://iterative.tools/2O7RlwK.


Section V

Machine Consensus Via Proof-of-Work

How does Bitcoin use a peer-to-peer network of computers to enforce the rules agreed upon by human participants?

“… Hardware is soft, a transient expression of ideas, and those ideas are more durable than the hardware itself.”

—Edward Ashford Lee, 2017[188]

In the last section, we discussed how hackers organize to create a system like Bitcoin, and established that the machines in the network are used to enforce rules upon the participants. But it can also be said that the machines enforce rules upon each other, such that clever humans are frustrated when trying to change them. This section explores how computers are used to keep human participants honest.

So far, we have contended that the “problems being solved” by Bitcoin are not abstractions (e.g., “central banking” or “soft money”) but the concrete challenges of coordinating specialized human labor outside a command-and-control structure. We’ve established that the motivations for avoiding a command-and-control structure are threefold:

  1. To minimize the opportunity and motivation for the managers of the system to cheat or hassle the participants.
  2. To attract skilled technologists to build the system without direct compensation (i.e., FOSS and open allocation).
  3. To eliminate gatekeeping, and allow anyone to use the system without permission; this achieves maximum growth and success of the software.

Next, we’ll talk about how Bitcoin accomplishes this feat of machine cooperation without losing these three desirable qualities.

How machines agree on a shared transaction history

Recall the first section, discussing Nakamoto’s message in the Genesis Block. About every 10 minutes, the system collates, validates, and bundles the new transactions. These bundles are called blocks. Block producers are called miners.

Each block contains a hash of the data from the previous block. A hash function is a one-way algorithm that maps data of arbitrary size to a fixed-size output string of bits, called a hash. Changing the data fed into the hash function changes the resultant hash. It is one-way because the original data cannot be reconstructed from the hash, even knowing the hash function. It follows that if a block contains a hash of the prior block, it must have been produced after the prior block existed. Since changing a block in the middle of a sequence of blocks would invalidate the hashes in all subsequent blocks, conceptually they are chained together. Blocks can only be appended to the end of the chain.
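
The chaining property is easy to demonstrate in a few lines of Python. This is a toy sketch, not Bitcoin’s actual block format: `hash_block` is a hypothetical helper that hashes strings, whereas Bitcoin double-hashes an 80-byte binary block header.

```python
import hashlib

def hash_block(prev_hash: str, data: str) -> str:
    # Toy block hash: real Bitcoin double-hashes a binary header,
    # not text, but the chaining principle is the same.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Each block commits to its predecessor's hash.
genesis = hash_block("0" * 64, "genesis transactions")
block1 = hash_block(genesis, "block 1 transactions")
block2 = hash_block(block1, "block 2 transactions")

# Tampering with an earlier block invalidates every block after it:
tampered1 = hash_block(genesis, "block 1 TAMPERED")
assert hash_block(tampered1, "block 2 transactions") != block2
```

Because each hash commits to the one before it, an attacker who alters a block in the middle of the chain must recompute every subsequent block as well.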

The data structure which results from creating a new block and including the hash of the prior block in a continuous manner is known as the blockchain. In a blockchain-based system all participants validate the hash of a new block before updating the state of their ledger.

How block producers are selected

We have established that all machines mining on the Bitcoin network work to bundle the transactions received since the last block. The first to report a valid new block has a chance at being paid the coinbase reward (currently 12.5 bitcoin).

But since most honest miners will report the same bundle of transactions, there will be many “correct” blocks, and only one reward winner. How does the system choose who wins, and how are clever miners prevented from winning every block?

Bitcoin’s consensus design selects a winner pseudo-randomly from among many potential miners by requiring the winning block to meet certain hard-to-predict characteristics. It is by requiring a certain number of leading zeros in the block hash that the “reward winner” is kept random. This is what is meant when Bitcoin miners are described as playing a “guessing game.”[189]

The screenshot below is taken from a blockchain explorer, a free public service which allows anyone to see all Bitcoin transactions. Note the block hash with 18 leading zeros, required by the difficulty factor at the time this block was mined:


Figure 9. The most recent block, as of the time of writing. Note the block hash (mentioned above) and block height, also known as the number of blocks since Nakamoto mined the Genesis block.
(Credit: Blockchain.info)

Satoshi Nakamoto set a 10-minute average block time as a constant. This average is maintained by raising or lowering the number of leading zeros required in a valid block hash. So while the Bitcoin system has no sense of “Earth time,” it does know when blocks are found too quickly or too slowly, and difficulty adjusts accordingly. For example, if a large amount of hashrate left the network, making block production too slow, the number of leading zeros required to find a block would drop, making the validation condition easier to satisfy and blocks faster to find.
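
The guessing game can be sketched as a brute-force search. This toy version (our own simplification) counts leading zero hex digits and uses a single SHA-256, whereas real Bitcoin compares a double SHA-256 of the block header against a fine-grained numeric target:

```python
import hashlib
from itertools import count

def mine(header: str, difficulty: int) -> tuple[int, str]:
    # Try nonces until the hash starts with `difficulty` zero hex digits.
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{header}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

# Each additional zero digit multiplies the expected work by 16.
nonce, digest = mine("prev_hash|merkle_root|timestamp", difficulty=4)
print(nonce, digest)
```

Raising `difficulty` by one makes a valid hash roughly sixteen times harder to find on average; lowering it has the opposite effect. That is exactly the knob the network turns to hold the 10-minute average.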

Unlike block #544937 above, block #0 below has only 10 leading zeros. Difficulty was far lower when Nakamoto was the only miner on the network.


Once validation criteria are met, the lucky block is propagated around the network and accepted by each full node, and it gets appended to a chain of predecessor blocks; at this time the winning miner is also paid.

Minting bitcoins for block producers

Each time a block is produced and a miner is paid, new bitcoins come into existence. The computer which finds a lucky hash is paid a reward automatically by the network, in bitcoin. This is called the coinbase reward. Like everyone else, miners must have a public key to receive these funds.

The coinbase reward is cut in half every 210,000 blocks, an event known as a halving. Halvings make bitcoin’s emission schedule disinflationary; eventually the emission rate will drop to zero, and only about 21 million bitcoins will ever be created by the network. Miners are theoretically incentivized to continue mining after the reward period ends around the year 2140, because they will continue to receive the transaction fees set by the senders of individual transactions.
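
The 21 million cap follows directly from the halving schedule, and can be checked by summing the rewards across eras (the function name is ours; the arithmetic mirrors the protocol’s integer satoshi accounting):

```python
def total_supply_satoshis() -> int:
    # Sum coinbase rewards over all halving eras. The protocol tracks
    # amounts in integer satoshis and halves by right-shifting.
    reward = 50 * 100_000_000  # era 0 reward: 50 BTC, in satoshis
    supply = 0
    while reward > 0:
        supply += 210_000 * reward  # 210,000 blocks per era
        reward >>= 1                # the halving
    return supply

print(total_supply_satoshis() / 100_000_000)  # ≈ 20,999,999.9769 BTC
```

The total lands just under 21 million because the integer halvings truncate fractional satoshis along the way.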

In this way, Bitcoin creates its currency through a distributed process, out of the hands of any individual person or group, and requiring intensive computing and power resources.

Turning energy into hashes crystallizes value

As more blocks get added to the chain, the cost of reverting a past transaction increases, and hence the probability that the transactions in the block are final increases. Proof-of-Work is cumulative in the sense that with more computing power on the network, it becomes more expensive to attack, making the ledger more secure.[190]
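
Nakamoto quantified this in the whitepaper’s closing section, computing the probability that an attacker with a minority of the hashrate ever rewrites a transaction buried z blocks deep. A Python transcription of that calculation (the function name is ours; the math follows the paper’s AttackerSuccessProbability routine):

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    # q: attacker's share of total hashrate; z: confirmations so far.
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while z blocks are found
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# With 10% of the hashrate, one confirmation leaves a ~20.5% chance of
# reversal; six confirmations cut it to roughly 0.02% (whitepaper table).
for z in (1, 3, 6):
    print(z, attacker_success_probability(0.10, z))
```

The probability falls off exponentially with depth, which is why each additional block makes the transactions beneath it harder to revert.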

In Bitcoin’s original whitepaper, Section 4, “Proof-of-Work,” reads as follows:[191]

“To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system… Once the CPU effort has been expended to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.”

Conceptually, Proof-of-Work burns energy in block issuance, which allows network participants to view immutability objectively.[192] Proof-of-Work reduces the entropy level within the system by consuming energy to create machine consensus around an ordered set of transactions. The cost of electricity consumption is borne collectively by miners to find “order” in “chaos” without a central coordinating agent.[193] This is the process through which physical resources (i.e., energy) are transformed into digital resources in the form of blocks of transactions, and the coinbase rewards which are the outcome of block production. Because these digital assets (i.e., blocks and transactions) are encoded on physical computer memory, it can be said that the Proof-of-Work process sublimates electricity into a physical bearer instrument, similar to the way that gold mining and minting can produce gold coins.

Blocks order transactions

We have said that Bitcoin hashes groups of transactions to create a single, verifiable block. We’ve also said that the blockchain creates a transaction history that cannot be changed without expending enormous amounts of energy. But accomplishing these two feats required some ingenuity on Nakamoto’s behalf.

Bitcoin users exist all over the world, and the messages carrying their transactions can travel no faster than the speed of light, so latency causes nodes to receive messages at different times, or out of order.

In any financial system, errors in transaction-logging can create disagreements between parties because balances will appear incorrect, or transactions will be missing. If disagreements are constant, the system is not usable. Whether in a paper ledger or a digital database, cheaters or saboteurs who want to erroneously increase their own balance (or simply wreak havoc) need only change the order of transactions (i.e., their timestamps) or delete them outright to cheat other participants.

The practice of “writing” ledger data into a hard-to-alter physical record is at least 30,000 years old, as exemplified by the clay tablets the ancient Sumerians used before the development of paper, and the more recent wooden “tally sticks” (seen below), which were still legal tender in the United Kingdom until the 19th century.[194]

Figure 10. Medieval tally sticks, notched and carved to record a debt on 32 head of sheep, owed to a local dean in Hampshire, England.
(Credit: Wikimedia)

Of course, keeping track of changes is no sweat for a spreadsheet on a single computer. When applications span multiple computers, networks are required to carry messages between them. Multi-computer applications deal with slow connections by using asynchronous algorithms, which are tolerant of dropped, latent, or out-of-order messages and are not driven by a time-based schedule.[195] In an asynchronous system, computers engage in parallel processing, but without moving forward in lock-step. Instead, messages (often user actions) trigger a change on each and every machine as it hears about the message.

Nakamoto consensus is highly reliable

Bitcoin too is an asynchronous event-driven system. But unlike conventional distributed systems, participants are not permissioned, meaning they have not been authenticated and authorized prior to participating. Yet somehow they all transition the state of their ledger together without a leader or any sort of coordinating mechanism beyond their own self interest. How can self-interest be used to coordinate a group of disparate, unvetted, and possibly hostile individuals?

One of the many strokes of brilliance in Bitcoin is the use of economic incentives to keep miners producing valid blocks on schedule. Miners earn rewards denominated in the unit of account of the ledger they maintain; that is, in bitcoin. Nakamoto’s conjecture was that any desire to corrupt the ledger, which would threaten the coin of the realm, would be outweighed by the vested interest participants hold in the coin’s value.

This way, miners in a distributed system like Bitcoin can come to agreement about the order of transactions, even if some of the nodes are slow or even maliciously producing invalid blocks. This happens without the restrictive requirements of permissioned consensus.

Bitcoin’s system has shown its resilience in both operational uptime and integrity of the ledger. Importantly, it can accomplish this feat without needing to vet the individual nodes on the network; machines can join or drop off at will, and the properties of the system remain the same.

Industrial mining in a nutshell

Compared to launching an ICO, venture investing, or volatility trading, a mining operation is the least exposed to capital market “narratives,” making it the most predictable cryptocurrency investment activity. Mining profitability is driven by semiconductor cycles, energy expenditure, and the overall performance of the cryptocurrency market. While a mining investment is fundamentally a long position, it comes with a lower cost basis, so long as a miner optimizes for overhead costs and buys machines at a fair retail price. A miner’s decisions to buy hardware or support a given network are much less influenced by short-term market fashions than by the fundamental qualities of the network protocol and the technological life cycle of the hardware being purchased. Considerations for miners include, but are not limited to, fundamental factors such as:

  • Choosing a viable network.
  • Sourcing from the right hardware manufacturers, at a fair price.
  • Timing the purchase with the hardware cycle.
  • Cost of energy and other overheads at host facility.
  • Security and staffing at host facility.
  • Liquid reward management.
  • Local regulation and tax.
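
To a first approximation, these considerations reduce to a revenue-minus-power-cost calculation. The sketch below uses hypothetical figures loosely modeled on a 2018-era 14 TH/s machine; a real model would add pool fees, difficulty growth, depreciation, and facility overhead:

```python
def daily_mining_profit(
    hashrate_th: float,          # machine hashrate, TH/s
    power_kw: float,             # machine power draw, kW
    electricity_usd_per_kwh: float,
    network_hashrate_th: float,  # total network hashrate, TH/s
    btc_price_usd: float,
    block_reward_btc: float = 12.5,
    blocks_per_day: int = 144,   # one block per ~10 minutes
) -> float:
    # Expected revenue is the miner's share of all coinbase rewards paid out.
    share = hashrate_th / network_hashrate_th
    revenue = share * blocks_per_day * block_reward_btc * btc_price_usd
    power_cost = power_kw * 24 * electricity_usd_per_kwh
    return revenue - power_cost

# Hypothetical inputs: 14 TH/s at 1.4 kW, $0.05/kWh, 50M TH/s network, $6,500/BTC.
print(daily_mining_profit(14, 1.4, 0.05, 50_000_000, 6500))  # ~ $1.60/day
```

Even modest changes in the electricity price or the network hashrate flip the sign of the result, which is why miners chase cheap power and time their hardware purchases so carefully.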

There are two main factors driving mining market dynamics: hashrate growth and price movement. Fundamentally, the two factors are deeply intertwined. Higher hashrate strengthens the security of the blockchain, making the network more valuable; in turn, as the price of the underlying coin increases, the demand for mining equipment grows, intensifying competition among mining hardware vendors to capture that demand.

Bitcoin hashrate has been increasing at a breathless pace despite the spot price having been butchered year-to-date. Since January 2018, Bitcoin miners and traders have lived in completely separate universes, with miners reinvesting in hardware and facilities, anticipating the next cycle of price appreciation that is expected to accompany continued engineering progress at the core protocol level. Because miners control liquidity, this amounts to a self-fulfilling prophecy. (An appendix discussing popular conceptions about price trends appears at the end of this paper.)

Figure 11. Hashrate continues growing in spite of dropping bitcoin prices.
(Source: bitinfocharts.com)

The mismatch between hashrate growth and price movement is the result of the different paces between hardware markets and capital markets. Under normal circumstances, mining difficulty can be predicted by semiconductor foundry TSMC’s wafer shipments, which account for a majority of Bitcoin ASIC production. Foundry lead times are longer than the Bitcoin price cycle, meaning wafers that are already in production during a downturn in the Bitcoin price would cause capacity to overshoot.


Figure 12. TSMC wafer demand may decline given unsustainable mining profits.
(Source: Morgan Stanley Research)

On the other hand, due to the cumulative nature of Proof-of-Work, higher hashrate poured into a network makes the system more secure and robust. A higher degree of finality means the system is more stable in supporting transaction volume, and more attractive for third-party developers to build on.

In Proof-of-Work cryptocurrencies, capital markets and distributed networks are tied together by design. As the bitcoin price climbed over the past decade, mining grew into a huge industry. In the first half of 2018, the largest cryptocurrency ASIC manufacturer, Bitmain, reported $2.5 billion in revenue and $1.1 billion in profit.[196]


Figure 13. Bitcoin miner revenue over time. (Source: Frost & Sullivan)

The rise of specialized hardware

Over the years, cryptocurrency mining has graduated from CPUs to GPUs to specialized hardware such as FPGAs (Field-Programmable Gate Arrays) and ASICs. Because of the competitive nature of mining, miners are incentivized to operate more efficient hardware, even if it means a higher upfront cost for these machines. As some manufacturers release faster and more efficient machines, miners are forced to upgrade, and an arms race emerges. Today, for the notable networks, mining is largely dominated by ASICs. Bitcoin’s SHA256d is a relatively simple computation; the job of a Bitcoin ASIC is to apply the SHA256d hash function trillions of times per second, at an efficiency no other type of chip can match.
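
SHA256d is nothing more than SHA-256 applied twice; the entire job of a Bitcoin ASIC is to evaluate this composition as fast as physically possible:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin's proof-of-work hash: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# An ASIC computes exactly this, trillions of times per second, varying
# only the nonce field of an 80-byte block header. (The input here is
# illustrative, not a real header.)
digest = sha256d(b"example block header bytes")
print(digest[::-1].hex())  # Bitcoin conventionally displays hashes byte-reversed
```

Because the function is fixed and simple, it can be etched directly into silicon, which is why general-purpose CPUs and GPUs cannot compete.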

First introduced in the 1980s, ASICs transformed the chip industry. In the cryptocurrency world, ASIC manufacturers (e.g., Bitmain) design chip architecture based on the specific hash algorithm of a given network. After multiple iterations and tests, the circuit layout for the photomask is sent to foundries such as TSMC and Samsung in the process known as a tape-out. The actual performance of the chips is not known until they return from the foundry. At this point, the ASIC manufacturer needs to optimize thermal design and chip alignment on the hashing board before the product is ready for production use.

The rise of application-specific hardware is inevitable, a natural trend in the evolution of computing hardware. Much like technology in gold mining and oil drilling developed over time as the base commodities became more valuable, application-specific hardware is improving quickly as cryptocurrency becomes more attractive. While short-term price action is mainly driven by speculation and has been observed to decorrelate from hashrate, over the long run the two factors form a virtuous feedback loop.[197]


Figure 14. Market size of the blockchain hardware market by revenues and growth rate globally, 2012-2020.
(Source: Frost & Sullivan) [198]


Figure 15. Market size of blockchain hardware market by revenues and growth rate in China, 2013-2020. (Source: Frost & Sullivan) [199]


Past, present, and future of ASIC manufacturing

A cryptocurrency miner is a heterogeneous computing system, meaning a system that uses multiple types of processors. Heterogeneous computing is becoming more common as Moore’s Law slows down. Gordon Moore, originator of the eponymous law, predicted that growing transistor density in semiconductor manufacturing would produce continuous and predictable hardware improvements, but also that these improvements had only 10-20 years before they reached fundamental physical limits.[200]

The first generation of Bitcoin ASICs included China’s ASICMiner, Sweden’s KNC, and Butterfly Labs and Cointerra in the U.S. Application-specific hardware quickly showed its promise. The first batch of ASICMiner hit the market in February 2013. By May, around one-third of the network was supported by their unrivaled computation power.

Integrated circuit competition is all about how quickly a company can iterate its product and achieve economies of scale. Without sufficient prior experience in hardware manufacturing, ASICMiner rapidly lost market share due to delays and a series of critical strategic mistakes.

Around the same time in 2013, Jihan Wu and Ketuan Zhan started Bitmain. In the early days of Bitcoin ASICs, simply improving upon the previous generation’s chip density, or tech node, offered an instant and efficient upgrade. Getting advanced tech nodes from foundries is always expensive, so the challenge was less about superior technical design than about the ability to raise funds. Shortly after its launch, Bitmain rolled out the Antminer S1 using TSMC’s 55nm chip.

In 2014, the cryptocurrency market entered into a protracted bear market, with the price of Bitcoin dropping nearly 90 percent. By the time the market recovered in 2015, the Antminer S5 (Bitmain’s then-latest machine) was the only product available to meet the demand. Bitmain quickly established its dominance. Subsequently, the lead engineer from ASICMiner joined Bitmain as a contractor, and developed the S7 and S9. These two machines went on to become the most successful cryptocurrency ASIC products sold to date.

The semiconductor industry is fast-paced. Increased competition, innovations in production, and economies of scale mean the price of chips keeps falling. For large ASIC mining companies to sustain their profit margins, they must tirelessly seek incremental design improvements.

How the hardware game is changing

In the past, producing a faster generation of chips simply required placing transistors closer together on the chip substrate. The distance between transistors is measured in nanometers. As chip designers begin working with cutting-edge tech nodes with transistor distances as low as 7nm, the improvement in performance may not be proportional to the decrease in distance between transistors. Bitmain had reportedly tried to tape out new Bitcoin ASIC chips at 16nm, 12nm, and 10nm as of March 2018. All of these tape-outs allegedly failed, costing the company almost $500 million.[201]

After the bull run of 2017, many new original equipment manufacturers (OEMs) entered the Bitcoin ASIC arena. While Bitmain is still the absolute leader in terms of size and product sales, the company is clearly lagging behind on the performance of its core products. Innosilicon, Canaan, Bitfury, Whatsminer (started by the same engineer who designed the S7 and S9), and others are quickly catching up, compressing margins for all players.

As the pace of tech node improvement slows down, ASIC performance becomes increasingly dependent on the company’s architectural design skills. Having an experienced team to implement fully-custom chip design is therefore critical for ASIC manufacturers to succeed in the future. In the long term, ASIC design will become more open-source and accessible, leading to commoditization.


Figure 16. Mining hardware & mining difficulty
(Credit: “The evolution of bitcoin hardware”[202])

Bitcoin mining started out as a hobbyists’ activity which could be done on a laptop. From the chart above we can see the accelerating move to industrialized mining. Instead of running mining rigs in a garage or basement, industrialized mining groups, cloud mining providers, and hardware manufacturers themselves now build or renovate data centers specifically tailored for cryptocurrency mining.[203] Massive facilities with thousands of machines operate 24/7 in places with ample electricity, such as Sichuan and Inner Mongolia in China, Quebec in Canada, and Washington State in the U.S.[204]

In the cut-throat game of mining, a constant cycle of infrastructure upgrades requires operators to make deployment decisions quickly.[205] Industrial miners work directly with machine manufacturers on overclocking, maintenance, and replacements. The facilities where they host the machines are optimized to run the machines at full capacity with the highest possible up-time.[206] Large miners sign long-term contracts with otherwise obsolete power plants for cheap electricity. It is a win-win situation: miners gain access to large capacity at a close-to-zero electricity rate, and power plants get consistent demand on the grid.

Over time, cryptocurrency networks will behave like evolving organisms, seeking out cheap and under-utilized power, and increasing the utility of far-flung facilities that exist outside present-day industrial centers. Because Proof-of-Work cryptocurrencies depend on continually appending blocks to the chain to maintain consensus, this search for cheap power never ends.

Over the years, many have voiced concern about the large amount of energy consumed in producing Bitcoin. Satoshi Nakamoto himself addressed this concern in 2010, saying:[207]

“It’s the same situation as gold and gold mining. The marginal cost of gold mining tends to stay near the price of gold. Gold mining is a waste, but that waste is far less than the utility of having gold available as a medium of exchange. I think the case will be the same for Bitcoin. The utility of the exchanges made possible by Bitcoin will far exceed the cost of electricity used. Therefore, not having Bitcoin would be the net waste.”

The “Delicate balance of terror” when miners rule

In a permissionless cryptocurrency system like Bitcoin, large miners are also potential attackers. Their cooperation with the network is predicated on profitability; should an attack become profitable, it’s likely that a large scale miner will attempt it. Those who follow the recent history of Bitcoin are aware that the topic of miner monopolies is controversial.[208]

Some participants believe ASICs are deleterious to the health of the network in various ways. In the case of hashrate concentration, the community is afraid of miners’ collective ability to wage what is known as a 51 percent attack, wherein a miner with the majority of hashrate can use this computing power to rewrite transactions or double-spend funds.[209] Such attacks are common in smaller networks, where the cost of achieving 51 percent of the hashrate is low.[210]
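The severity of a majority attack can be quantified with the gambler’s-ruin analysis from the Bitcoin whitepaper. A minimal sketch (the function name is ours; the formula is Nakamoto’s):

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of total
    hashrate ever overtakes the honest chain from z blocks behind
    (gambler's-ruin result from the Bitcoin whitepaper)."""
    p = 1.0 - q  # honest hashrate fraction
    return 1.0 if q >= p else (q / p) ** z

# At 30 percent of hashrate, rewriting 6 confirmations is a long shot:
catch_up_probability(0.30, 6)   # ~ 0.006
# At a 51 percent majority, eventual success is certain:
catch_up_probability(0.51, 6)   # 1.0
```

This is why the 51 percent threshold is qualitative rather than gradual: below it, an attack fails with overwhelming probability; above it, success is only a matter of time.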

Any mining pool (or cartel of mining pools) with over 51 percent of the hashrate owns the “nuclear weapon” in the network, effectively holding the community hostage with raw hashrate.[211] This scenario is reminiscent of Cold War-era nuclear strategist Albert Wohlstetter’s notion of a delicate balance of terror:

“The balance is not automatic. First, since thermonuclear weapons give an enormous advantage to the aggressor, it takes great ingenuity and realism at any given level of nuclear technology to devise a stable equilibrium. And second, this technology itself is changing with fantastic speed. Deterrence will require an urgent and continuing effort.”

While large miners can theoretically initiate attacks that bend the consensus history to their liking, they also risk tipping off the market to their attack, causing a sudden collapse of the token price. Such a price collapse would render the miner’s hardware investment worthless, along with any previously-earned coins held long.[212] Where manufacturing is highly concentrated, clandestine 51 percent attacks are easier to achieve.[213]

Figure 17: Miner concentration by pool.
(Credit: blockchain.com).

In the past few years, Bitmain has dominated the market both in the form of hashrate concentration and manufacturing concentration. At the time of this writing, analysts at Sanford C. Bernstein & Co. estimate that Bitmain controls 85 percent of the market for cryptocurrency-mining chips.[214]

“Tyranny of Structurelessness” when core developers rule

While hostile miners pose a constant threat to permissionless cryptocurrency systems, the dominance of the core software developers can be just as detrimental to the integrity of the system. In a network controlled by a few elite technologists, spurious changes to the code may not be easily detectable by miners and full node operators running the code.

Communities have taken various approaches to counter miners’ overwhelming amount of influence. The team at Siacoin decided to manufacture its own ASIC miner upon learning of Bitmain’s Sia miner.[215] Communities such as Zcash take a cautiously welcoming attitude to ASICs.[216] New projects such as Grin designed the hashing algorithm to be RAM (Random Access Memory) intensive so that ASICs are more expensive to manufacture.[217] Some projects such as Monero have taken a much harsher stance, changing the hashing algorithm just to render one manufacturer’s ASIC machines inoperable.[218] The fundamental divide here is less about “decentralization” and more about which faction controls the means of producing coinbase rewards valued by the marketplace; it is a fight over control of the “golden goose.”[219]

Due to the highly dynamic nature of decentralized networks, acting swiftly against power concentration around miners can lead to the opposite extreme: power concentration around developer figureheads. Both types of concentration are equally dangerous. The latter extreme leads to a tyranny of structurelessness, wherein the community worships the primary committers in a cult of personality, under the false premise that there is no formal power hierarchy. The term comes from social theorist Jo Freeman, who wrote in 1972:[220]

“As long as the structure of the group is informal, the rules of how decisions are made are known only to a few and awareness of power is limited to those who know the rules. Those who do not know the rules and are not chosen for initiation must remain in confusion, or suffer from paranoid delusions that something is happening of which they are not quite aware.”

A lack of formal structure becomes an invisible barrier for newcomer contributors. In a cryptocurrency context, this means that the open allocation governance system discussed in the last section may go awry, despite the incentive to add more development talent to the team (thus increasing project velocity and the value of the network).

Dominance by either miners or developers may result in changes to the development roadmap which undermine the system. An example is the erroneous narrative perpetuated by “large block” miners.[221] The Bitcoin network eventually split in two on August 1, 2017, as some miners pushed for larger blocks, which would have increased costs for full node operators, who play a crucial role in enforcing rules on a Proof-of-Work blockchain. Higher costs might mean fewer full node operators on the network, which in turn brings miners one step closer to upsetting the balance of power in their own favor.

Another example of imbalance is the Ethereum Foundation. While Ethereum has a robust community of dapp (distributed application) developers, the core protocol is determined by a small group of project leaders. In preparation for Ethereum’s Constantinople hard fork, the developers decided to reduce mining rewards by 33 percent without consulting the miners. Over time, alienating miners in this way leads to a loss of support from a major group of stakeholders and creates new incentives for miners to attack the network for profit or revenge.[222]

Market consensus is achieved when humans and machines agree

So far we have discussed human consensus and machine consensus in the Bitcoin protocol. Achievement of these two forms of consensus leads to a third type, which we will call market consensus:[223]


Figure 18. Consensus in the marketplace results from human and machine consensus.
(Credit: Narayanan et al., Bitcoin and Cryptocurrency Technologies, p. 169)

The three legs are deeply intertwined, and they require each other for the whole system to work well. Many cryptocurrency projects, including Bitcoin, have suffered from a “delicate balance of terror,” a “tyranny of structurelessness,” or both at various times in their history; this is one source of the rapidly-changing perceptions of Bitcoin, and of the subsequent price volatility. Can these oscillations between terror and tyranny be attenuated?

Attenuating the oscillation between terror and tyranny

Some projects have chosen to reduce the likelihood of a “delicate balance of terror” by resisting the participation of ASIC miners. A common approach is to modify the Proof-of-Work algorithm to require more RAM to compute the block hash; this effectively makes ASIC miners more expensive (and therefore riskier) to manufacture. However, this is a temporary measure, assuming the network grows and survives; as the underlying cryptocurrency becomes more valuable, manufacturers are incentivized to roll out ASICs anyway, as evidenced in Zcash, Ethereum, and potentially the Grin/Mimblewimble project.[224]
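The memory-hard idea can be illustrated with scrypt, the RAM-intensive hash that Litecoin adopted for this very reason. The networks named above use different algorithms; this is only a sketch of the principle, using Python’s standard library:

```python
import hashlib

# scrypt makes memory a first-class cost of hashing: RAM required is
# roughly 128 * n * r bytes, so raising the cost parameter n forces any
# ASIC implementation to provision proportionally more (expensive) memory.
cheap  = hashlib.scrypt(b"block header", salt=b"\x00", n=2**10, r=8, p=1)  # ~1 MiB
costly = hashlib.scrypt(b"block header", salt=b"\x00", n=2**14, r=8, p=1)  # ~16 MiB
```

Unlike pure SHA-256, where an ASIC’s advantage comes from cramming cheap hashing cores onto a die, the bottleneck here is memory, which commodity hardware already handles reasonably well.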

Some think that mining centralization in Proof-of-Work systems is an ineluctable problem. Over the years there have been various proposals for different consensus protocols that do not involve mining or energy expenditure.[225] The most notable of these approaches is known as Proof-of-Stake.

Proof-of-Stake consensus is a poor alternative

While there are various ways to implement Proof-of-Stake, an alternative consensus mechanism to Proof-of-Work, the core idea is that in order to produce a block, a miner must prove ownership of a certain amount of the network’s coins. In theory, holding the network asset reduces one’s incentive to undermine the network, because an attack would reduce the value of one’s own position.

In practice, the Proof-of-Stake approach proves to be problematic in systems where the coins “at stake” were not created through Proof-of-Work. Prima facie, if coins are created out of thin air at no production cost, the value of one’s stake may not be a deterrent to a profitable attack.[226] This is called the “Nothing-at-Stake” critique.

So far in this section, we have not discussed other ways of producing coins besides Proof-of-Work mining. However, in some alternative cryptocurrency systems, it is possible to create pre-mined coins, at no cost, with no Proof-of-Work, before the main blockchain is launched. Projects such as Ethereum called for the pre-mining of a vast majority of the circulating supply of coins, which were sold to insiders at a fraction of miners’ cost of production. Combining a pre-mine with Proof-of-Work mining for later coins is not necessarily a dishonest practice, but if undisclosed, gives the erroneous impression that all coins in existence have a cost-of-production value. In this light, Ethereum’s stated transition to Proof-of-Stake should be viewed with some skepticism.

A full dressing-down of Proof-of-Stake consensus is beyond the scope of this essay; suffice it to say that it is not a viable replacement for Proof-of-Work consensus mechanisms.[227] Some Proof-of-Stake implementations try to circumvent attack vectors with clever incentive schemes, such as Ethereum’s yet-to-be-released Slasher mechanism.[228]

The critical fault of Proof-of-Stake systems is the source of pseudorandomness used to select block producers.[229] In Proof-of-Work, the winner of the block reward is randomized by the expenditure of a large amount of computing power in the search for a block hash with the required number of leading zeros. In stake-based consensus algorithms, by contrast, the order of block producers is randomized through a low-cost operation performed on prior block data. This self-referential process is easily compromised should anyone figure out how to predict the next block producer, and attempting such predictions has little or no cost.[230]
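The Proof-of-Work search described above can be sketched in a few lines. This is a toy model (real Bitcoin hashes an 80-byte header against a compact-encoded target), but the brute-force structure is the same:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double-SHA256 hash has at least
    `difficulty_bits` leading zero bits. The only way to win is to
    spend compute, so the winner is effectively random."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example header", 16)  # ~65,000 hash attempts on average
```

Verification, by contrast, is a single hash, so full nodes can audit miners’ work at negligible cost: the randomness is expensive to produce but free to check.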

In short, consensus on history built with Proof-of-Stake is not immutable, and is therefore not useful as the basis for a digital economy.[231] However, corporate or state-run projects may successfully deploy working Proof-of-Stake systems which limit attack vectors by requiring permission or payment to join the network; in this way, Proof-of-Stake systems are feasible, but will be slower-growing (owing to the need to vet participants) and more expensive to operate in practical terms (for the same reason, and owing to the need for security measures that wouldn’t otherwise be needed in a PoW system, which is expensive to attack).

The necessary exclusivity required for PoS to function limits its utility, and limits the growth potential of any network which relies upon PoS as its primary consensus mechanism. PoS networks will be undermined by cheaper, more reliable, more secure, and more accessible systems based on Proof-of-Work.

Proof-of-Stake as an abstraction layer on top of Proof-of-Work

Whether some form of Proof-of-Stake will ever replace Proof-of-Work as the predominant consensus mechanism is currently one of the most-debated topics in cryptocurrency. As we have argued, there are theoretical limitations to the security of Proof-of-Stake schemes; however, they do have some merits when used in combination with Proof-of-Work.

In Nakamoto Proof-of-Work consensus, it can be said that “one CPU is one vote.” In Proof-of-Stake, it can be said that “one coin is one vote.” Distributing influence over coin holders arguably creates a wider and more liquid distribution for coinbase rewards than the mere paying of miners, who (as we have discussed) have an incentive to cartelize in an attack scenario. Therefore, Proof-of-Stake may be an effective addition to Proof-of-Work systems if used to improve human consensus about network rules. However, it is not robust enough to be used alone.
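“One coin is one vote” reduces to stake-weighted selection. A minimal sketch (the names are ours; production systems derive the seed from prior chain data, which is the predictability weakness discussed earlier):

```python
import random

def pick_producer(stakes, seed):
    """Select the next block producer with probability proportional
    to coins held: 'one coin is one vote'."""
    rng = random.Random(seed)
    holders = list(stakes)
    weights = [stakes[h] for h in holders]
    return rng.choices(holders, weights=weights, k=1)[0]

# A holder with 90 of 100 staked coins wins roughly 90 percent of rounds.
pick_producer({"whale": 90, "minnow": 10}, seed=7)
```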

Taking a step back, Proof-of-Work and Proof-of-Stake can be considered to exist at two different abstraction layers. Proof-of-Work is the layer that is closest to the bare metal, connecting hardware and physical resources to create distributed machine consensus. Proof-of-Stake may be useful for coordinating dynamic human behavior in such a system, once immutability of the underlying ledger and asset is guaranteed by Proof-of-Work.

An interesting architectural design is to use Proof-of-Work to produce blocks, and Proof-of-Stake to give full-node operators a voice in which blocks they collectively accept. These systems split the coinbase reward between miners and full-node validators instead of delivering 100 percent of rewards to miners. Stakeholders are incentivized to run full-nodes and vote on any changes miners want to make to the way they produce blocks.

The thinking goes like this: When compensated, full node operators can be trusted to act honestly, in order to collect the staking reward and increase the value of their coins; similarly, miners are incentivized to honestly produce blocks in order that their blocks are validated (not rejected) by stakers’ full nodes. In this way, networks with Proof-of-Work for base-layer machine consensus, and Proof-of-Stake for coinbase reward distribution and human consensus, can be said to be hybrid networks.
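A hybrid network’s incentive wiring is visible in how the coinbase is divided. The split below is hypothetical, though the 60/30/10 ratio mirrors one deployed hybrid network, Decred:

```python
def split_coinbase(reward, pow_share=0.60, pos_share=0.30):
    """Divide the block reward between miners (PoW), voting full-node
    stakers (PoS), and a development fund. Ratios are illustrative."""
    miner = reward * pow_share
    stakers = reward * pos_share
    treasury = reward - miner - stakers
    return miner, stakers, treasury

miner, stakers, treasury = split_coinbase(100.0)  # roughly 60, 30, 10
```

The design choice is that stakers only collect their share by running full nodes and voting on blocks, so the reward schedule itself pays for the validation work that Bitcoin relies on volunteers to perform.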

Such hybrid PoW/PoS architectures may prevent the network from descending into a delicate balance of terror (miner control) or into tyranny of structurelessness (developer control). These systems allow decisions about the rules of machine consensus to be taken by more than one group of stakeholders, instead of solely among core developers (as in traditional open allocation) or among large miners in a cartel.[232]


In this section, we have elucidated how computers on the Bitcoin network achieve decentralized and distributed consensus at a global scale. We’ve examined why Proof-of-Work is a critical enabler of machine consensus, and how Proof-of-Stake, while flawed, may be used in addition to Proof-of-Work to make human consensus (i.e., project governance) more transparent and inclusive. In the next section, we will discuss the value of public cryptocurrency systems when stakeholders are held in a stable balance of power.


Footnotes

[188] Lee, Edward Ashford, Plato and the Nerd. MIT Press, 2017. https://iterative.tools/2pysrIr.

[189] Houy, Nicolas. “The Bitcoin Mining Game,” 2016. https://iterative.tools/2pOMOkZ.

[190] LaurentMT. “Gravity,” August 27, 2018. https://iterative.tools/2xNNudL.

[191] Nakamoto, Satoshi. “Bitcoin: a peer to peer electronic cash system,” January 9, 2009. https://iterative.tools/2zsRYZC.

[192] Nguyen, Hugo. “The anatomy of proof-of-work,” February 10, 2018. https://iterative.tools/2OQ0AP8.

[193] Zhang, Shoucheng. “Blockchain Consensus Reduce Entropy of Individual but Enhance That of the Entire System,” January 23, 2018. https://iterative.tools/2pxpWpM.

[194] “Tally Sticks,” Wikipedia. https://iterative.tools/2y7fqKR.

[195] Wattenhofer, Roger. The Science of Blockchain. Self-published, 2016. https://iterative.tools/2pzK7U5.

[196] Bitmain Technology Holding. SEHK IPO prospectus, September 26, 2018. https://iterative.tools/2OkmNsn.

[197] Zhang, Leo. “The miner-trader disparity,” August 30, 2018. https://iterative.tools/2QXmyRx.

[198] Canaan Inc. SEHK Application Proof, May 15, 2018. https://iterative.tools/2yIwW7o.

[199] Ibid.

[200] Zahran, Mohamed, “Heterogeneous computing: here to stay,” March 2018, Communications of the ACM.

[201] Bitmex Research, “Unboxing Bitmain’s IPO (part 2)”, September 26, 2018. https://blog.bitmex.com/unboxing-bitmains-ipo/

[202] Taylor, Michael Bedford. “The evolution of bitcoin hardware,” September 2017. https://cseweb.ucsd.edu/~mbtaylor/papers/

[203] Ibid.

[204] Bitmain Technology Holding. SEHK IPO prospectus, September 26, 2018. https://iterative.tools/2OkmNsn.

[205] Kampl, Alex. “Analysis of large-scale Bitcoin mining operations,” 2014. https://iterative.tools/2MUUhrP.

[206] Open Compute Project. https://www.opencompute.org/

[207] Nakamoto, Satoshi. Bitcointalk Forum, August 7, 2010. https://bitcointalk.org/index.php?topic=721.20

[208] Hsue, Derek. “Is the war against ASICs worth fighting?,” April 4, 2018.

[209] Wanlin, Alexandre. “Why mining pool concentration is the Achilles’ heel of Bitcoin?” Hackernoon, February 19, 2018. https://hackernoon.com/why-mining-pool-concentration-is-the-achilles-heel-of-bitcoin-ce91089ce1f

[210] Zhang, Leo. “For small networks, the attacks are just beginning,” May 24, 2018. https://iterative.capital/2018/05/24/a-reflection-on-the-series-of-51-attacks/

[211] Wohlstetter, Albert. “The Delicate Balance of Terror,” RAND Corp., 1958. https://iterative.tools/2xAn1RO.

[212] Song, Jimmy. “Mining centralization scenarios,” April 10, 2018. https://iterative.tools/2yiLYRr.

[213] Vorick, David. “The state of cryptocurrency mining,” May 13, 2018. https://iterative.tools/2PagMe1.

[214] Russolillo, Steven. “Crypto meets Wall Street as Bitcoin mining giant Bitmain files for IPO”. Wall Street Journal, September 27, 2018. https://iterative.tools/2zVaI42.

[215] Zhang, Leo. “Slaying the dragon, one fork at a time,” August 15, 2018. https://iterative.tools/2OcZkt1.

[216] Murray, David. “Bitmain to begin shipping Equihash ASIC miners in June,” Block Explorer, May 5, 2018. https://iterative.tools/2N0rFxh.

[217] Zhang, Leo. “Inside Grin’s approach to resisting ASICs,” August 27, 2018. https://iterative.tools/2pEIC7o.

[218] van Wirdum, Aaron. “Monero just hard forked - and it resulted in four new projects,” Bitcoin Magazine, April 6, 2018. https://iterative.tools/2Q34oN2.

[219] Zhang, Leo. “The warring states of cryptocurrency,” August 31, 2018. https://iterative.tools/2zU4m4S.

[220] Freeman, Jo. “The tyranny of structurelessness,” The Second Wave, V. 2, No. 1, 1972. https://iterative.tools/2N1aZpA.

[221] Mahmudov, Murad and Taché, Adam. “The many faces of Bitcoin,” Medium, April 10, 2018. https://iterative.tools/2O850Uy.

[222] “Ethereum Core Devs meeting Constantinople session #1.” YouTube, August 31, 2018. https://iterative.tools/2RQBjWX.

[223] Narayanan, page 169.

[224] Zhang, Leo, “Inside Grin’s approach to resisting ASICs” Aug 27, 2018. https://iterative.tools/2pEIC7o.

[225] The Economist, “Why bitcoin uses so much energy”, July 9, 2018. https://iterative.tools/2O6Afjs.

[226] Buterin, Vitalik. “On Settlement Finality,” May 9, 2016. https://iterative.tools/2O420sj.

[227] Poelstra, Andrew. “On stake and consensus,” March 22, 2015. https://iterative.tools/2zUlJTf.

[228] Buterin, Vitalik, “Slasher: a punitive proof-of-stake algorithm,” Ethereum Blog, January 15, 2014. https://iterative.tools/2QxwUXs.

[229] Sztorc, Paul. “Nothing is Cheaper than Proof of Work,” April 4, 2015. https://iterative.tools/2OcHYf2.

[230] Sehra, Avtar, “Blockchain and the Promise of the Perpetual Consensus Machine,” Medium, June 20, 2017. https://iterative.tools/2xSoKkC.

[231] Nguyen, Hugo. “Proof-of-Stake & the Wrong Engineering Mindset,” Medium, March 18, 2018. https://iterative.tools/2Q7NNbc.

[232] Red, Richard. “What is on-chain cryptocurrency governance? Is it plutocratic?,” June 20, 2018. https://iterative.tools/2O4YFKc.


Section VI

How Value Accrues In Proof-of-Work Networks

Considering the outcomes of Bitcoin’s incentive structure, and the levers that control them.

“Unix and C are the ultimate computer viruses.”

Richard P. Gabriel, 1983[233]

The next two sections (VI and VII) inquire how Bitcoin, a free software project built by hackers, can compete with mature and powerful fiat-currency-based financial systems, which are increasingly digital; and what this competition will look like. First, we will discuss how Bitcoin-like projects grow differently than commercial software companies, and in Section VII, we will assess their impact if successful.

What qualities cause cryptocurrency systems to grow in value?

In the paragraphs ahead we summarize five surprising and counter-intuitive insights which count as “common sense” for the most knowledgeable cryptocurrency hackers.

We have established that free, open source software, built in the New Jersey style, has rapidly outstripped commercial competitors at the foundations of the Web. The benefits of this approach to software-building fall into two categories: developer draw and hardware draw.

1. Developer Draw

Here we use the term “developer draw” to describe how operationally healthy and attractive to prospective contributors an open source project is. When a project has high developer draw, skilled individuals happily volunteer time, energy, ideas, bug fixes, and computing resources to it.

Satoshi Nakamoto envisioned Bitcoin as a platform for private economic activity, maintained by loose groups of volunteers. Platforms are most useful when they are stable.[234] Stable platforms have few bugs and a clear use, making them ideal for “entrepreneurial joiners,” a distinct type of economic actor who does not want to assume the risk of founding a new project, but will contribute to an existing project if it accrues them similar benefits.[235] A platform which is simple, stable, useful, and welcoming to new contributors will attract developers and joiners, as described in the aforementioned MIT study.[236]

Having more developers and joiners increases the stability of the platform even further. The thesis that “given enough eyeballs, all bugs are shallow” is known as Linus’s Law, after the creator of Linux.[237] It means that the more widely the source code is available, the more it benefits from public testing, scrutiny, and experimentation. These activities result in stable software.

In a private company building proprietary code, the momentous task of debugging falls on the few developers who have access to the codebase. For an open allocation project like Bitcoin, there is huge benefit in attracting an unlimited number of “eyeballs,” but only as long as there is a mechanism in place to prevent spurious changes that create time-wasting busy work for other contributors. Otherwise it would be no better than the average corporate software development project!

Bitcoin’s incentive system allows the best of both worlds. Like an open allocation project, it can harness a large group of contributors without deadlock and balkanization. Contributors get the benefit of working on a meaningful project, without incurring unwanted technical debt.

Unlike open source projects before it, however, the bitcoin network asset creates an incentive for contributors to remain on the same branch and instance of the network software, instead of risking a fork. While a fork is an easy way to end a technical argument between contributors, in a network with an asset, forks carry an implicit economic threat: they may be perceived by the market as making the platform less stable, and therefore less valuable, pushing down the price of the network asset. Like a commercial company, Bitcoin’s organizational structure incentivizes contributors to work out their differences and keep the group intact, for everyone’s financial gain.

Thus, Bitcoin is the first free, non-commercial software project with the intensity of a commercial product. Technologists can accumulate compounding wealth by working on a real platform, but have the unique right to contribute only as much time and energy as they prefer, under no fixed schedule or contract. Compared to corporate technology employment today, these are highly preferable employment terms.

2. Hardware Draw

We use the term “hardware draw” as a general metric of machine accessibility. Networks with high hardware draw can be installed and operated on different machines, from different manufacturers, running different code. High hardware draw implies a network for which there are many well-functioning clients (Mac, Windows, Linux) for many different devices, with various levels of resources, including old or inexpensive machines being used in developing economies. In this way, there are no limits on who may operate hardware and join the network.

The concept of hardware draw has its roots in New Jersey style viral software, which prioritizes low resource use, so as to be compatible with many older or cheaper computers (emphasis added)[238]:

“The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50 percent functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they? Unix and C are the ultimate computer viruses.”

In Bitcoin, transactions contain small amounts of data, and the blockchain grows slowly. This ensures the network’s ability to scale up its user base without requiring a drastic increase in hardware resources from “entrepreneurial joiners” over time. Because Bitcoin is a peer-to-peer network, if it generated data at a high rate, hardware requirements would increase for individual users, reducing hardware draw. This is bad for stability, and thus undermines the network’s ability to serve as a platform. Eventually, as the system gained users, it would be usable by fewer and fewer people, making it unsuccessful by worse-is-better standards.
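The slow-growth claim is easy to check on the back of an envelope, assuming the 1 MB base block size and the ten-minute average block interval:

```python
# One block roughly every 10 minutes, at most ~1 MB each (base size):
blocks_per_year = 6 * 24 * 365            # 52,560 blocks
growth_gb = blocks_per_year * 1.0 / 1024  # worst-case chain growth
# Roughly 51 GB/year: a consumer hard drive absorbs years of full blocks.
```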

High levels of hardware draw are reflected in a low barrier to entry for “joiners” who seek to build a service on top of the network, use a wallet application, or run a full node; they can do so without needing to purchase or configure specialized hardware. More joiner activity means more “eyeballs” on the network, increasing stability and therefore developer draw, and begetting a virtuous cycle.

Conversely, a system which starts out with low hardware draw—requiring fast, expensive computers to run—may never reach an adequate population of users:[239]

“Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90 percent, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.”

Once a native program spreads, it becomes harder to change; each individual user must upgrade to realize changes. Furthermore, an over-reliance on upgrading the software later will result in technical debt, as some users fail to upgrade, and developers feel pressure to continue to support these old versions of the software.

Thus New Jersey style also dictates that “it is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable.” Comments from Nakamoto on June 17, 2010, imply that the challenge of Bitcoin was designing a network which would have high developer draw and high hardware draw, but still achieve “functionality closer to 90 percent” of what people would want in a currency system right off the bat:[240]

“The nature of Bitcoin is such that once version 0.1 was released, the core design was set in stone for the rest of its lifetime. Because of that, I wanted to design it to support every possible transaction type I could think of. The problem was, each thing required special support code and data fields whether it was used or not, and only covered one special case at a time. It would have been an explosion of special cases. The solution was script, which generalizes the problem so transacting parties can describe their transaction as a predicate that the node network evaluates. The nodes only need to understand the transaction to the extent of evaluating whether the sender’s conditions are met… Future versions can add templates for more transaction types and nodes running that version or higher will be able to receive them… The design supports a tremendous variety of possible transaction types that I designed years ago. Escrow transactions, bonded contracts, third party arbitration, multi-party signature, etc. If Bitcoin catches on in a big way, these are things we’ll want to explore in the future, but they all had to be designed at the beginning to make sure they would be possible later.”
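The predicate idea Nakamoto describes can be sketched as a toy stack machine. This is deliberately simplified (real Bitcoin Script has dozens of opcodes, signature checks, and no loops), but OP_ADD and OP_EQUAL are genuine Script opcodes:

```python
def eval_script(script):
    """Run a tiny Forth-like script; the 'transaction' is valid iff a
    truthy value is left on top of the stack."""
    stack = []
    for op in script:
        if op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a == b else 0)
        else:                  # anything else is data: push it
            stack.append(op)
    return bool(stack) and bool(stack[-1])

# Locking condition "x + 3 must equal 5", unlocked by supplying x = 2:
eval_script([2, 3, "OP_ADD", 5, "OP_EQUAL"])   # True
```

Nodes need only evaluate the predicate; they never need to understand what the transaction is “for,” which is what makes the design generalize to escrow, arbitration, and multi-party signatures.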

This uncompromising (but somewhat extensible) design rationale makes Bitcoin viral and also useful to a broad base of potential users.[241]

Developer draw drives hardware draw

Hackers enjoy writing software, and will work on a network protocol before it is launched, and before its coins have any value. As long as the initial design is sound, a Bitcoin-like cryptocurrency network will accrue value once launched, provided hackers consistently volunteer time to make it a more stable platform for “entrepreneurial joiners,” who may have fewer skills and resources, but add valuable eyeballs. Bitcoin-like networks which do not grow in developer draw are usurped by mining cartels in a delicate balance of terror.

This means that in projects where developer draw is high, diverse contributors improve the underlying system, building and testing client applications on a broad base of hardware and software platforms. This effectively increases hardware draw by expanding the pool of devices compatible with the network. Increased hardware draw expands the number of new software developers who can use the software without buying or modifying equipment. This virtuous cycle begins with developer draw.

Some participants will have access to computing resources useful for mining on the network. Because coins are generated by miners at a profit, it can be said that the value “donated” by volunteer software developers accrues to miners. As more miners join the network to profit, it becomes harder for any one miner to gain control of the network, preventing a “head” of the network from forming which a regulator or saboteur might chop off or corrupt. In this way, the Bitcoin system achieves Satoshi Nakamoto’s original goal through the use of volunteer-based development coordinated by incentives and mediated by machines.

The enrichment of miners is a trade-off which is acceptable to the contributors only when they enjoy the contribution. If contributions are difficult or unpleasant, developer draw drops. Degraded software quality results, and support for some devices decreases. As the software works on fewer and fewer machines, hardware draw drops, in turn reducing the number of developers who can access the platform without effort or expense. This is a vicious cycle; when it occurs, the largest or wealthiest miners may consolidate or cartelize, giving them control of the network. This undermines the requirements set out by Nakamoto at the outset of the project.


In this section we have distilled the “common sense” benefits of Bitcoin’s incentive system. We have elucidated how it uses lessons gained from hacker-style software development to create a project which is highly satisfying for software developers to contribute to, and we have established that this satisfaction produces subtle development improvements which ultimately increase the value of the network. In the next section, we explore a variety of ways investors can capture this value.


Footnotes

[233] “Worse is better,” Wikipedia. https://en.wikipedia.org/wiki/Worse_is_better.

[234] Eastwood, Brian. “What makes a product platform succeed,” MIT Sloan Newsroom, October 25, 2016, https://iterative.tools/2QWGq7q.

[235] Roach, Michael and Sauermann, Henry. “Founder or joiner?” Management Science, Vol. 61, No. 9, April 21, 2015. https://iterative.tools/2Q19Ksa.

[236] Lakhani and Wolf, 2005. https://iterative.tools/2NKeOQT.

[237] Gabriel, Richard. “The Rise of ‘Worse Is Better,’” 1991, https://iterative.tools/2ONxFuV.

[238] Ibid.

[239] Ibid.

[240] Bitcointalk.com, June 17, 2010, https://iterative.tools/2N43uOJ.

[241] Ou, Elaine, “Bitcoin’s Anarchy Is a Feature, Not a Bug,” Bloomberg Opinion, 2018. https://iterative.tools/2pzupIE.


Section VII

The Investment Outlook

Hypothesizing about cultural and economic impacts at scale.

“The truth was, their revolution was not about an idea. It was about how you manage things.”

—BBC filmmaker Adam Curtis re: Occupy Wall Street.[242]

We have presented Bitcoin as an innovation in organization design. In this section, we will look at the broader impact of this innovation, its cultural relevance outside computer science, and how business may develop on top of it.

Cultural-historical timing is apt

Ward Cunningham, the engineer who coined the metaphor “technical debt,” draws a parallel between poor choices in software development and financial debt:[243]

“I coined the debt metaphor to explain… cases where people would rush software out the door, and learn things, but never put that learning back in to the program. That, by analogy, was borrowing money thinking you never had to pay it back. Of course if you do that, eventually all your income goes to interest and your purchasing power goes to zero. By the same token, if you develop a program for a long period of time and only add features—never reorganizing it to reflect your understanding—then all efforts to work on it take longer and longer.”

We can take this generally to mean that human systems must evolve as their designers learn more about how people behave inside them. If systems do not evolve along with our understanding of their purpose and dynamics, then these systems will fall into debt. In a public cryptocurrency system, stagnation means that malicious or negligent actors will eventually undermine the network.

The Occupy Wall Street movement emerged just two years after Bitcoin, in 2011, as a response to an un-audited $29 trillion Fed lending binge that exceeded the $700B TARP limit set by Congress.[244] It can be said that OWS protested the origination of public debt by managers of the system.

Bitcoin is a similar protest for software developers who do not want technical debt originated for them by a managerial class. Both Bitcoin and Occupy Wall Street are responses to a perceived weakness in human systems, which occasionally allow small groups of managers to indebt everyone else. Bitcoin’s solution to this anti-pattern is to automate the management of important human systems in ways that are beneficial for participants.

Dilution of institutional boundaries may ensue

To understand the impact of Bitcoin, we return to Coase, and his theory that firms exist to reduce the transaction costs of specialists who collaborate in business. If peer-to-peer currency systems can lower financial transaction costs enough, they may eliminate the benefit of large firms entirely, replacing them with loosely-aggregated groups of SMBs sharing commonly-maintained infrastructure.

Coase writes that such a development would have massive societal impact, namely to subvert intellectual property laws and undermine the economics of large institutions:

"I showed in ‘The Nature of the Firm’ that, in the absence of transaction costs, there is no economic basis for the existence of the firm. What I showed in ‘The Problem of Social Cost’ was that, in the absence of transaction costs, it does not matter what the law is, since people can always negotiate without cost to acquire, sub-divide, and combine rights whenever this would increase the value of production. In such a world the institutions which make up the economic system have neither substance nor purpose. Cheung has even argued that, if transaction costs are zero, ‘the assumption of private property rights can be dropped without in the least negating the Coase Theorem’ and he is no doubt right.”[245]

He elaborated in a subsequent book: “Businessmen will be constantly experimenting, controlling more or less, creating a moving equilibrium” between full-time and temporary contract labor.[246] These impacts are also consistent with the stated goals of Satoshi Nakamoto and the Cypherpunks, whose resistance to institutional authority is rooted in a resentment for the managerial class and for the laws that protect and incentivize proprietary software.

Timothy May, the Intel executive and an original cypherpunk, predicted in 1992:

“Just as the technology of printing altered and reduced the power of medieval guilds and the social power structure, so too will cryptologic methods fundamentally alter the nature of corporations and of government interference in economic transactions. Combined with emerging information markets, crypto anarchy will create a liquid market for any and all material which can be put into words and pictures. And just as a seemingly minor invention like barbed wire made possible the fencing-off of vast ranches and farms, thus altering forever the concepts of land and property rights in the frontier West, so too will the seemingly minor discovery out of an arcane branch of mathematics come to be the wire clippers which dismantle the barbed wire around intellectual property.”[247]

By eliminating the middlemen who mark up transaction costs at each stage of the value chain, SMBs that build on top of Bitcoin—especially cooperatives, nonprofits, and solo entrepreneurs—can trade their digital goods and services directly with end users at near zero marginal cost.[248]

What cryptocurrency-based independent employment looks like

Individual entrepreneurs or small groups of developers can monetize free and open source projects in a number of ways. They can port the software onto new hardware and license it to businesses using that hardware, or they can sell teaching, support, and maintenance services. Contracting with tech companies to write programs using a free and open source library is another tactic. Indeed, many cryptocurrency developers run small consultancies; an example would be Ethereum co-founder Gavin Wood’s software agency Parity.

Older FOSS projects provide insights into the future of Bitcoin. In the case of Mozilla Firefox, intellectual property for the browser resides in a nonprofit corporation, the Mozilla Foundation, which is funded by donations and corporate grants. Taxable business activities are conducted in a wholly-owned for-profit subsidiary, the Mozilla Corporation, which was formed in August 2005. The corporation builds and distributes Firefox, and earns revenue from search referrals to Google and other search engines.[249] This “dual entity” structure, with a foundation and a corporation, has been mimicked in other open source projects, including Bitcoin, which is maintained by a group of developers known as “Bitcoin Core,” some of whom have formed a commercial entity called Blockstream, which builds enterprise applications on top of Bitcoin for profit.

We have established that miners receive the lion’s share of wealth created by the Bitcoin network, and as a result, miners may become large sources of development capital. Many large-scale miners also manufacture mining machines and operate mining pools for other miners for a small fee.

Bounty-hunting is another approach to software entrepreneurship. Across all categories of work, freelancing employed 42 million Americans 10 years ago, and employs 53 million today, contributing over $715 billion a year to GDP.[250] An increasing number of freelance platforms are offering work per-job, or in software terms, paying per problem solved.

Contract job boards such as GeekBoy, HackerOne, Zerocopter, Bugcrowd, and Gitcoin allow developers to take contract development jobs on a per-problem basis, getting paid for their solution, not their time. Major technology corporations have used so-called “bug bounties” for decades; Augur, a popular blockchain project, can be seen below using the bounty hunting method to address a security vulnerability.


Figure 19. Augur is just one of many cryptocurrency projects using bounty-based employment to effect quality assurance.

Perhaps the best implementation of a bitcoin-based bounty hunting system is BitHub, created by cypherpunk and Signal Messenger creator Moxie Marlinspike. BitHub does two things for Signal Messenger, which is free and open source software:

  1. It allows Signal Messenger to take donations in bitcoin
  2. It pays out this bitcoin to developers who fix bugs

In this way, existing products and services can hire and retain high-quality engineering talent, on a completely pseudonymous basis, and totally ad hoc, simply by offering a Bitcoin payment. Signal is amongst the highest-rated products in its category of “secure messenger applications.” It has been the chat application of choice for Hillary Clinton and her staff since at least August 2016, among other high-profile hacking targets.[252] [253]

Figure 20. Bitcoiners are proud of their incentive system, which accomplished large-scale work without financing or incorporation. Bitcoin is often characterized as a “honeybadger” for its rugged characteristics.
(Source: Twitter.[254])

Investigating the altcoin business model

Because Bitcoin develops slowly in the “bazaar,” and has no marketing department, it can appear from the outside fairly chaotic, and in many ways “worse” than privately-developed alternatives. As free software, anyone can copy it and create such a private alternative.

Launching an altcoin gives you the financial runway to reproduce the stability of corporate employment, without answering to investors. (Just miners and users!) What is the distinction?

In a cryptocurrency context, a “scam” is a project which:

  1. Will not grow or retain its developer pool, forestalling any chance at viral growth or stability.
  2. Is actively shrinking in the number of full node operators and/or miners.
  3. Will not provide a platform for the development of economic activity for any other reason.

Not all network operators are intentional scammers. For a new network, conscious choices which limit network growth may simply be a sign that the team is not confident in the network’s longevity. Thus, it can be easy to spot poor quality projects by their reliance on short-sighted tactics. While there is no firm litmus test for the viability of a project, the following characteristics can be considered red flags:

  • The network is operated primarily by one incorporated entity.
  • Any component of its software is proprietary.
  • Decisions about code commits are closed to outside contributors.
  • Development process is private; only insiders know how decisions are made.
  • The project is free and open source, but multiple implementations are politically unviable.

Obstacles to altcoin competition

Bitcoin is a complex codebase which contains 12 years of brilliant engineering. Starting from scratch means re-encountering many of the same problems all over again; forking and attempting to work on an unfamiliar code base can mean endless frustration, as one learns its peculiarities. The biggest challenge to competing with Bitcoin is catching up to thousands of hours of contributions it has received.

Accelerating past the normal pace of open allocation requires some new tricks, because the usual speed-ups (raising money, paying fat salaries, central planning) often end up reducing developer draw and hardware draw, not increasing them.

DACs, or decentralized autonomous companies, are an attempt at overcoming this problem using the usual corporate carrots—resource planning, a salary and stable employment—but without the dreaded human managers. This may enable project velocity to increase without the introduction of undesirable qualities, but the efficacy of this approach remains to be seen.

DAC-operated cryptocurrency networks are interesting to the extent that they fulfill the following requirements:

  1. Are similar to Bitcoin in architecture, with Proof-of-Work securing the base layer.
  2. Allow coins to be exchanged for bitcoin without a trusted central party, in an “atomic swap.” [255]
  3. Are resistant to fork attacks from large ASIC miners, with plenty of hashrate or fork-resistance mechanisms.[256]
  4. Can use hybrid PoW/PoS to improve the fairness of human consensus.
  5. Have some mechanism by which the contributor base may scale to the point where development velocity exceeds Bitcoin’s.
  6. Are not controlled by a dictator, whose presence reduces the fun and freedom of open allocation, killing developer draw.
  7. Have a talented team which can attract engineers and excitement to test the limits of team scale.
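The “atomic swap” referenced in point 2 hinges on a hashed contract: both chains lock coins against the same hash, so claiming one side requires publishing the secret preimage, which simultaneously unlocks the other side. A minimal sketch of the hashlock half follows; the timelock and refund logic are omitted, and the function names are our own:

```python
import hashlib
import secrets

# Alice picks a secret and publishes only its hash; contracts on both
# chains lock funds against this same hash.
preimage = secrets.token_bytes(32)
payment_hash = hashlib.sha256(preimage).digest()

def can_claim(candidate_preimage: bytes) -> bool:
    """Either chain's contract releases funds only to someone who
    presents the preimage of the agreed hash."""
    return hashlib.sha256(candidate_preimage).digest() == payment_hash

# When Alice claims Bob's coins she must reveal `preimage` on-chain,
# which lets Bob claim hers with the same value: the swap is atomic.
assert can_claim(preimage)
assert not can_claim(secrets.token_bytes(32))
```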

Where value accumulates for investors

Who benefits from the forces at work in public cryptocurrency networks? The following points represent outstanding opportunities for capital.

  • Given: Bitcoin is a self-organizing infrastructure project which provides flexible employment and intellectual stimulation for technologists.
  • Insight: For these reasons, bitcoins themselves are valued collectibles within the technologist demographic, which is a critical and growing segment of the workforce. As infrastructure improves, perceived value increases.
  • Given: Owing to Bitcoin’s 10-year head start and brilliant contributor base, its development will out-pace all but a few exceptionally competent projects. The few projects which survive will do so by innovating on top of Bitcoin’s incentive model to speed development velocity without introducing technical debt, “catching up” with Bitcoin in functionality and network security.
  • Insight: Over time, the entire value of the asset class will collapse into a select handful of undervalued cryptocurrencies, which have used DAC or hybrid consensus governance to increase project velocity to the point of competitiveness with Bitcoin.
  • Given: In a large and secure cryptocurrency network, miners are equivalent to Galbraith’s shareholders: “irrelevant fixtures” to its development, but owners nonetheless.
  • Insight: Mining OEMs, large-scale mine operators, and mining-related service providers will accumulate the vast majority of wealth created by Bitcoin and other cryptocurrency networks during the issuance period, despite expending far fewer human resources than the software developers who volunteer contributions.
  • Given: The speed, low operating costs, and settlement finality characteristic of “layer 2” point-to-point networks, built on top of base-layer cryptocurrencies, will make them ideal for retail and e-commerce payments as competitors to Visa, Mastercard, and Paypal. (Lightning Network has been well-explained already by others.[257] [258] [259])
  • Insight: There will be many competing L2 networks built by both FOSS groups (such as Lightning) and private commercial interests (such as ICE). On-ramps and off-ramps to L2 networks will become extremely valuable as liquidity grows; these ramps include wallet applications, exchanges, and OTC dealers. Secondarily, these ramps will serve as natural portals for e-commerce activity.
  • Given: As cryptocurrency transaction volume increases, major platforms like Apple iTunes and Google Play will continue to block cryptocurrency apps and digital collectibles from their devices, protecting their in-app Apple Pay and Google Pay purchasing frameworks, which developers on those platforms are required to use to sell digital goods.[260] This payment-framework apartheid will create demand for third party privacy smartphones running Linux-based, GNU, or BSD operating systems, and which natively run cryptocurrency protocols. (Already, at least one such distribution has appeared.[261])
  • Insight: After ASIC miners, smartphones will be the second most valuable category of cryptocurrency-specific devices whose prices are denominated in cryptocurrency. These devices will become highly-valued distribution and aggregation points for products and services offered by “entrepreneurial joiners” who integrate with, and build atop, Bitcoin and other networks.
  • Given: Forced to compete with free software developed by large self-organizing masses of volunteers, and gaining nothing but unnecessary costs from their strict full-time hierarchy, major SAAS companies will suffer financially, forcing consolidation and layoffs. Many of these companies will launch competing “blockchain” based systems, but they will be too expensive and insecure for practical use. This may cause unexpected frustration for large software companies.
  • Insight: We expect a private equity boom in the early 2020s, in which tokenized debt financing is used to finance a wave of hostile bust-up takeovers, unbundling large public technology companies, laying off elements of their technostructure, and reorganizing their teams to function autonomously on an open allocation basis. New digital financial products will be issued which entitle investors to streams of income from individual teams, products, or services within the formerly-unified company. In this way, public stocks will become baskets of “atomic equities” that represent the performance of each constituent unit in a given value chain; divisions between corporate entities and jurisdictions will cease to be relevant factors in the issuance of public and private securities. This activity will be pioneered by engineer-led investment groups, not incumbent underwriters, who will not be able to retain the necessary engineering talent to undertake such activities.

Should the last insight come to pass, it would amount to the disintermediation of big capital, an idea that has been discussed favorably on social media, as expressed in Figure 21 below.

Figure 21. Antipathy for corporate managers and venture capitalists runs deep in the Bitcoin community.
(Source: Twitter[262])

Things investors should generally avoid

  • Initial coin offerings (ICOs). As we have discussed, cryptocurrency projects only qualify as good platforms for business if they earn volunteer contributions. Pre-minting tokens and selling them to “investors,” with a rich stash held back for the “team,” creates strong incentives for technical debt and command-and-control management which eventually drives out the best talent, crushing the utility of the network and the price of the coin.
  • ICO advisors and diversified ICO coin “funds.” The market leader, Bitcoin, has exhibited extreme virality amongst software developers, miners, and retail investors. It has strong network effects. Very few projects will survive alongside Bitcoin, and they will be successful for reasons recounted in this essay—reasons that most ICO launchers would find surprising and counter-intuitive. Over-diversification will kill most cryptocurrency investment funds, who will miss out on market beta by holding too little bitcoin.
  • Venture-backed cryptocurrencies and private blockchains. The 10-year investment horizon for venture capital funds limits long-term thinking, because companies are forced to dazzle investors each time they recapitalize. This “fundraising treadmill” feeds marketing narratives and “wow” features that generate technical debt. As we’ve learned, such systems cannot compete with the costs of open allocation non-commercial projects.

Categorizing coins for investment

In this paper we have discussed the context and origins of hacker culture, the free software movement, cypherpunks, and the currency system Bitcoin which is characteristic of these origins. We believe there are a substantial number of people who value Bitcoin strongly for the reasons mentioned.

Which coins are also valuable? Developing criteria from the narrative above is fairly straightforward. To someone who values Bitcoin, altcoins are valuable if they meet the criteria in Section VI, but with alternative techniques. Coins become less valuable as they hew more closely to traditional, hierarchical, corporate software development processes. Here is how we categorize coins:


Figure 22. Plotting the investability of cryptocurrency projects based on organizational design. Bitcoin and Ethereum can be said to be in the lower-right and lower-left quadrants respectively. Private chains such as Ripple would appear in the upper-left. Hybrid PoW/PoS chains are in the upper-right hand corner.

  1. The top-left quadrant: A non-starter for investors; it is pure speculation on corporate-style projects which will inevitably rank lower in developer draw and higher in transaction costs, with more bugs and less stability than FOSS permissionless blockchains.
  2. The lower-right quadrant: Bitcoin appears here, along with similar open allocation FOSS forks of Bitcoin. While the fork may begin with one developer, others quickly join if they see differentiation characteristics in the new fork.
  3. The lower-left quadrant: Reflects the reality of many FOSS permissionless blockchains, which may have begun life in the lower-right quadrant. Ethereum seems to be migrating from the lower-right to the lower-left. These quadrants are generally investible, but the migration towards the lower-left is considered to be a negative attribute for a permissionless chain.
  4. The top-right quadrant: Meaningful attempts at innovation on top of Bitcoin are here, in the form of accelerating project velocity with automated governance, and without introducing the flaws of centralization (namely, technical debt and lack of open allocation). Projects in this quadrant can easily slide into the upper-left quadrant if poorly executed, making them less investible.

Conclusion: what is driving the cryptocurrency phenomenon?

Bitcoin has been largely characterized as a digital currency system built in protest of Central Banking. This characterization misapprehends the actual motivation for building a private currency system, which is to escape what is perceived as a corporate-dominated, Wall Street-backed world of full-time employment, technical debt, moral hazards, immoral work imperatives, and surveillance-ridden, ad-supported networks that collect and profile users.

To developers, adoption of Bitcoin and cryptocurrency symbolizes an exit (or partial exit) from the corporate-financial employment system in favor of open allocation work, done on a peer-to-peer basis, in exchange for a currency that is anticipated to increase in value.

Freelancing and solo entrepreneurship are already popular in Silicon Valley and amongst Millennial and Gen-X workers because these lifestyles afford them self-directed, voluntary work. Highly-skilled technology workers are already fed up with big tech, the drive for profit, and the spectre of technical debt. The leverage is increasingly on the side of the individual engineers; this is why the Uber executive quoted in the Preface fears the company may be “fucked” if it “can’t hire any good engineers.”

This “exiting” of the mainstream employment system is why some members of the investor class may intuit Bitcoin as a threat:

  1. If technologists exit the corporate-financial system en masse, the reduction in available technical labor would stymie the technical development of public companies, banks, and governments, whose services are increasingly digital.
  2. If technologists build a cheap, private, and reliable “alternative financial system,” and if such a system cannot be regulated or taxed out of existence, then business activity will flow naturally into such a system to realize lower transaction costs. This draws value out of existing forex, equities, and real-asset markets, crushing the margins of existing financial services.

We believe these points provide critical insight into Warren Buffett’s classification of Bitcoin as “rat poison,” which is similar in tone to the reaction of Steve Ballmer to Linux, when he characterized it as a “cancer” that would destroy the Windows OS. To the administrators of expensive, proprietary monopolies, free and open source systems are deadly.

Charlie Munger’s assertion that cryptocurrencies are “turds,” also quoted in the Preface, is a more nuanced and less threatened reaction than his business partner’s. Cryptocurrency appears to be a “worse” currency system than the existing system, but it’s also clear that this “worse” substitute is interesting to young people; it simply confounds Munger that “worse is better” when a financial system is built in software instead of paper. He has probably never developed software, or encountered New Jersey Style, but that’s no fault of his.


For the last 50 years, technologists have been motivated to create a culture of software development that exists outside institutional boundaries. Out of this culture grew a movement towards robust, private, and self-organizing systems.

This vision is embodied in Bitcoin, which lays the groundwork for ways of working in information technology businesses without a bureaucracy. Given what we know about the moral quality of the Cypherpunks’ struggle against institutional oversight, it’s easy to see why a sense of righteousness might be on display in the most fervent Bitcoin advocacy groups. In short, William Shatner got it right with his assessment in 2014:


Figure 23. If hackers can be deemed programming snobs (and we think they can) then William Shatner’s assessment of Bitcoin in 2014 was highly perceptive.
(Source: Twitter[263])

Far from being a novelty or prototype, Bitcoin has shown itself to be a threatening alternative to present-day organizational conventions and to the large commercial businesses that rely on them. It may spur a radical unbundling of corporate business as it lowers transaction costs for the institutions that adopt it. While the effects of such unbundling are unpredictable, value seems most likely to accumulate in cryptocurrency services businesses; in hardware makers and operators that rent computing resources to the network; and in building businesses on the layer 2 networks.

In the final part of this essay, we have looked at the potential impact of Bitcoin’s success, and expectations about its price. We’ve examined why most altcoins are doomed and we have provided guidance on investments to avoid, and hypothesized where value will accumulate for savvy allocators.

To learn more about specific investment strategies in the cryptocurrency asset class, or to view industry news through our investment lens, please sign up for the Iterative Capital newsletter.


Footnotes

[243] Ward Cunningham, “Debt Metaphor.” YouTube. February 14, 2009. https://iterative.tools/2xMP3cb.

[244] Carney, John. “The Size of the Bank Bailout: $29 Trillion,” CNBC, December 14, 2011. https://iterative.tools/2OL7epM.

[245] Coase, R. H. The Firm, the Market, and the Law. National Bureau of Economics, 1987, page 7. https://iterative.tools/2zrqIdJ.

[246] Coase, R. H., “The Nature of the Firm,” Economica, 1937, New Series, Vol. 4, No. 16, November 1937, pp. 386-405. https://iterative.tools/2xRjr4Y.

[247] May, Timothy, “The Crypto Anarchist Manifesto,” Satoshi Nakamoto Institute, 1992, https://iterative.tools/2OWlR9G.

[248] Rifkin, Jeremy. The Zero Marginal Cost Society. Palgrave Macmillan, 2014. Page 43. https://iterative.tools/2QTQIoR.

[249] Hoyt, David; Sutton, Robert; Rao, Hayagreeva. “Mozilla: Scaling through a community of volunteers,” December, 2009. https://iterative.tools/2MYOMs4.

[250] “Freelancing in America,” Elance, https://iterative.tools/2LT4LaJ.

[251] AugurProject, Twitter, July 2, 2018. https://iterative.tools/2ORQEo7.

[252] “BitHub = Bitcoin + GitHub. An experiment in funding privacy OSS,” Signal Blog, December 16, 2013. https://signal.org/blog/bithub/

[253] Roberts, Jeff John. “Hillary Clinton’s Campaign Uses This Messaging App to Foil Hackers,” Fortune, August 29, 2016. https://iterative.tools/2QVh7ml.

[254] Whalepanda, Twitter, July 16, 2018. https://iterative.tools/2xOCQDQ.

[255] “Atomic cross chain trading,” Bitcoin Wiki, https://iterative.tools/2ztsATe.

[256] “Detailed Analysis of Decred Fork Resistance,” Reddit, https://iterative.tools/2QlmcnK.

[257] “How does the Lightning network work in simple terms?” Bitcoin Stack Exchange, April 12, 2016. https://iterative.tools/2oAW1wr.

[258] Poon, Joseph, and Dryja, Thaddeus. “The Bitcoin Lightning Network: Scalable Offchain Instant Payments,” January 14, 2016. https://iterative.tools/2DuIsch.

[259] Van Wirdum, Aaron. “Understanding the Lightning Network, Part 1: Building a Bidirectional Bitcoin Payment Channel,” May 31, 2016. https://iterative.tools/2NeAkQT.

[260] Dale, Brady. “Apple Abruptly Orders Coinbase Wallet to Remove Crypto Collectible,” August 31, 2018. https://iterative.tools/2OeJ6ie.

[261] Braiins Systems. “Introducing Braiins OS, open-source system for cryptocurrency devices,” Medium, September 23, 2018. https://iterative.tools/2NHpi7R.

[262] @Beautyon_. Twitter, July 16, 2018. https://twitter.com/beautyon_/status/1018835494094950400?s=21

[263] Shatner, William (@WilliamShatner). Twitter, January 2, 2014. https://iterative.tools/2Q8zmUd.



Popular Conceptions About Price Trends

Is there more at work than self-fulfilling prophecy?

Historically, Bitcoin and other cryptocurrencies experience periods of rapid appreciation after challenging social, engineering, or regulatory hurdles have been cleared. This has been the case for the UASF soft fork of 2017, various technical integrations, and the launch of CBOE futures. Time-based milestones also bring investors into the market, perhaps as a result of the Lindy effect.[264]

As a result, Bitcoin’s “halving” events, in which the rate of new bitcoin issued to miners is automatically reduced by the network at periodic intervals, produce price appreciation as well. Accumulation has held relatively constant for 9 years, but volatility is characteristic, and it is unknown how the market will react once the emission period has ended and all 21 million bitcoin are in circulation. In this appendix, we discuss the levers which are widely perceived to move spot prices.

Factors driving retail speculation

Privacy concerns have become mainstream since proof of government spying was revealed in the U.S. by Edward Snowden in 2013.[265] The number of Internet users and tech workers is growing, and people are concerned about who may view their data. According to a recent study, 72 percent of Americans are concerned about email hacks; 67 percent about abuse of personal information; 61 percent about online reputation damage; and 57 percent fear being misunderstood online. [266]

Hollywood may be helping to feed this online paranoia. The struggle of technologists against bureaucratic management has become a cultural trope. Cypherpunk culture has benefited from the mainstreaming of its stories and concepts in films (and remakes) like “Tron,” which extends ideas about cyberspace pioneered by dystopian cyberpunk novelist William Gibson.

In his 1984 novel “Neuromancer,” Gibson introduces the concept of “the matrix,” a virtual-reality system in which human memory and perception are mechanized.[267] The 1999 film “The Matrix” drew on this concept, and has likewise cultivated paranoia about the use of monotechnic megamachines to achieve unethical ends.

Popular conceptions about price

Portfolio managers generally combine fundamental analysis and technical analysis when assessing equities. As we have discussed, “fundamental analysis” for cryptocurrency investors is a matter of evaluating developer draw and hardware draw. But because bitcoin trades like any other commodity, it is worth addressing the way market participants generally approach bitcoin price and trading.

Figure 24: Prediction through August 2019.
(Credit: Fujibear on TradingView)

The “Price Channel” Theory

Traders generally adhere to a few ideas about the trend in Bitcoin’s price, which may or may not be self-fulfilling:

  • That its shapes are repeating “fractals.”
  • That it exhibits Wyckoff Market Cycles. [268]
  • That its deflationary emission rate causes regular price increases, particularly acutely in response to halving events.
  • That its value will generally increase over time.

In this section we explore a variety of charts which depict commonly circulated ideas about future trends. We do not endorse these predictions but present them as anecdotal evidence of views within communities of traders.

Figure 25: Halving and price, up to present day.
(Credit: BusinessInsider)

The Halving Theory

Many traders believe that price action is driven by Bitcoin’s automated and periodic “halving” of the coinbase reward paid to miners for finding blocks. The halvings are the reason that bitcoin is said to be a deflationary currency. Every 210,000 blocks, or roughly every four years, the network automatically cuts the block reward paid to miners exactly in half.
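The schedule described above can be sketched in a few lines. This is an illustrative model, not consensus code; the constants (a 50 BTC initial subsidy, a 210,000-block halving interval) mirror Bitcoin’s published parameters, and the integer right shift mimics the halving logic in Bitcoin Core:

```python
# Illustrative model of Bitcoin's halving schedule (not consensus code).
COIN = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000  # blocks between halvings (~4 years)

def block_subsidy(height: int) -> int:
    """Coinbase subsidy in satoshis at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:                # shifted past 64 bits: subsidy is zero
        return 0
    return (50 * COIN) >> halvings    # integer halving per era

def total_supply() -> int:
    """Sum each era's emission to find the terminal supply."""
    return sum(HALVING_INTERVAL * block_subsidy(era * HALVING_INTERVAL)
               for era in range(64))

print(block_subsidy(0) / COIN)        # 50.0
print(block_subsidy(210_000) / COIN)  # 25.0
print(total_supply() / COIN)          # 20999999.9769 -- just under 21 million
```

Because the subsidy is an integer number of satoshis, repeated halving eventually rounds the reward to zero, which is why the terminal supply falls slightly short of 21 million.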

Miners are notorious for holding back their rewards, perhaps in an effort to contribute to illiquidity and drive up prices. Presumably, they must liquidate some rewards periodically to reduce risk. The price threshold at which miners are willing to liquidate may increase dramatically after halvings, which may or may not produce the price effect demonstrated in the chart above. Many versions of this chart circulate. Below, another halving-and-price chart:

Figure 26. Color-coded chart showing the distance between halvings relative to Bitcoin price.
(Credit: @100Trillion on Twitter [269])

The Hype Cycle Theory

Bitcoin’s promise as a self-organizing micro-economy is not well understood by the retail public, and it is routinely co-opted and oversold by charlatans looking to cash in on Bitcoin’s technical narrative.

Figure 27. 2011-2014 “hype cycle” price chart, based upon Gartner’s Hype Cycle.
(Credit: Wikimedia)

Some traders believe that, as Bitcoin makes increasing progress in terms of its reliability, liquidity, and general efficacy, it creates new opportunities for charlatans to sell an increasingly obvious future to non-technical investors. Above, a chart showing the way price coincides with periods of media hype, highlighted in blue.

The Hashrate theory

Retail cryptocurrency investors tend to assume that miners join a network when it is profitable to mine, but there is some evidence that the relationship between network hashrate and price works in the opposite direction.[270] Vitalik Buterin of the Ethereum project has built a series of hashrate-price estimators that attempt to measure Bitcoin price endogenously.[271]

Figure 28. Bitcoin price charted against hashrate, 2010-2014.
(Credit: HashingIt.com)

This counter-intuitive relationship may be more rational than it appears: when a network is new, its token is nearly valueless. Yet if the development team and the code show potential, miners may contribute hashrate to the network on a speculative basis, before the coin is even listed on exchanges. The growth of Bitcoin’s hashrate despite downward price pressure seems to support the hypothesis that miners mine in anticipation of future value, rather than to liquidate rewards right away.[272]
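One way to probe this hypothesis is a simple lead-lag correlation: if hashrate leads price, the correlation between hashrate and *later* price should exceed the contemporaneous correlation. The sketch below uses synthetic data (the series, lag, and coefficients are invented for illustration); a real test would substitute historical network and exchange data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_corr(leader, follower, lag):
    """Correlation of leader[t] with follower[t + lag]."""
    n = len(leader) - lag
    return pearson(leader[:n], follower[lag:lag + n])

# Synthetic series: "price" copies "hashrate" with a 3-period delay.
hashrate = [math.sin(0.3 * t) + 0.1 * t for t in range(120)]
price = [2.0 * hashrate[t - 3] if t >= 3 else 0.0 for t in range(120)]

# The lag with the highest correlation estimates how far hashrate "leads."
best_lag = max(range(8), key=lambda k: lagged_corr(hashrate, price, k))
print(best_lag)  # 3 -- hashrate leads price in this toy data
```

In the toy data the three-period lag recovers the built-in delay exactly; on real data the peak correlation would be noisier and would require care with trends and non-stationarity.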

The Fractal Theory

Other, more superstitious traders seem to believe that Bitcoin price patterns recur as fractals at various intervals.

Figure 29: Comparison between 2014 (left) and 2018 (right).
(Credit: TradingView).


In this section we’ve sampled some of the theories behind Bitcoin price action. While miners control the liquidity of newly minted coins, a large share of the supply is also held by speculators, many of whom profess undying commitment to long positions. While there is reason to believe the Bitcoin network will grow in value over time, it is impossible to say whether the mania of 2017 was a unique event or the continuation of a larger, longer trend.

