Zucked: How Users Got Used and What We Can Do About It

Unlike the vertical integration of mainframes and minicomputers, which limited product improvement to the rate of change of the slowest evolving part in the system, the horizontal integration of PCs allowed innovation at the pace of the most rapidly improving parts in the system. Because there were multiple, competing vendors for each component, systems could evolve far more rapidly than equivalent products subject to vertical integration. The downside was that PCs assembled this way lacked the tight integration of mainframes and minicomputers. This created a downstream cost in terms of training and maintenance, but that was not reflected in the purchase price and did not trouble customers. Even IBM took notice.

When IBM decided to enter the PC market, it abandoned vertical integration and partnered with a range of third-party vendors, including Microsoft for the operating system and Intel for the microprocessor. The first IBM PC shipped in 1981, signaling a fundamental change in the tech industry that only became obvious a couple of years later, when Microsoft’s and Intel’s other customers started to compete with IBM. Eventually, Compaq, Hewlett-Packard, Dell, and others left IBM in the dust. In the long run, though, most of the profits in the PC industry went to Microsoft and Intel, whose control of the brains and heart of the device and willingness to cooperate forced the rest of the industry into a commodity business.

ARPANET had evolved to become a backbone for regional networks of universities and the military. PCs continued the trend of smaller, cheaper computers, but it took nearly a decade after the introduction of the Apple II before technology emerged to leverage the potential of clusters of PCs. Local area networks (LANs) got their start in the late eighties as a way to share expensive laser printers. Once installed, LANs attracted developers, leading to new applications, such as electronic mail. Business productivity and engineering applications created incentives to interconnect LANs within buildings and then tie them all together over proprietary wide area networks (WANs) and then the internet. The benefits of connectivity overwhelmed the frustration of incredibly slow networks, setting the stage for steady improvement. It also created a virtuous cycle, as PC technology could be used to design and build better components, increasing the performance of new PCs that could be used to design and build even better components.

Consumers who wanted a PC in the eighties and early nineties had to buy one created to meet the needs of business. For consumers, PCs were relatively expensive and hard to use, but millions bought and learned to operate them. They put up with character-mode interfaces until Macintosh and then Windows finally delivered graphical interfaces that did not, well, totally suck. In the early nineties, consumer-centric PCs optimized for video games came to market.

The virtuous cycle of Moore’s Law for computers and Metcalfe’s Law for networks reached a new level in the late eighties, but the open internet did not take off right away. It required enhancements. The English researcher Tim Berners-Lee delivered the goods when he invented the World Wide Web in 1989 and the first web browser in 1991, but even those innovations were not enough to push the internet into the mainstream. That happened when a computer science student by the name of Marc Andreessen created the Mosaic browser in 1993. Within a year, startups like Yahoo and Amazon had come along, followed in 1995 by eBay, and the web that we now know had come to life.
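As a rough sketch of the two laws as they are commonly stated (the exact doubling period and value exponent vary by formulation, and this is not the author's own wording):

```latex
% Common textbook statements of the two laws (approximate, illustrative only):
% Moore's Law: transistor counts double roughly every two years, so after t years
% a chip holds about
N(t) \approx N_0 \cdot 2^{\,t/2}
% transistors, where N_0 is the starting count.
%
% Metcalfe's Law: a network of n users supports n(n-1)/2 possible pairwise
% connections, so its value grows roughly with the square of its size:
V(n) \propto \frac{n(n-1)}{2} \approx \frac{n^2}{2}
```

Faster chips make bigger networks practical, and bigger networks justify investment in faster chips, which is why the two curves reinforce each other.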

By the mid-nineties, the wireless network evolved to a point that enabled widespread adoption of cell phones and alphanumeric pagers. The big applications were phone calls and email, then text messaging. The consumer era had begun. The business era had lasted nearly twenty years—from 1975 to 1995—but no business complained when it ended. Technology aimed at consumers was cheaper and somewhat easier to use, exactly what businesses preferred. It also rewarded a dimension that had not mattered to business: style. It took a few years for any vendor to get the formula right.

The World Wide Web in the mid-nineties was a beautiful thing. Idealism and utopian dreams pervaded the industry. The prevailing view was that the internet and World Wide Web would make the world more democratic, more fair, and more free. One of the web’s best features was an architecture that inherently delivered net neutrality: every site was equal. In that first generation, everything on the web revolved around pages, every one of which had the same privileges and opportunities. Unfortunately, the pioneers of the internet made omissions that would later haunt us all. The one that mattered most was the choice not to require real identity. They never imagined that anonymity would lead to problems as the web grew.

Time would expose the naïveté of the utopian view of the internet, but at the time, most participants bought into that dream. Journalist Jenna Wortham described it this way: “The web’s earliest architects and pioneers fought for their vision of freedom on the Internet at a time when it was still small forums for conversation and text-based gaming. They thought the web could be adequately governed by its users without their need to empower anyone to police it.” They ignored early signs of trouble, such as toxic interchanges on message boards and in comments sections, which they interpreted as growing pains, because the potential for good appeared to be unlimited. No company had to pay the cost of creating the internet, which in theory enabled anyone to have a website. But most people needed tools for building websites, application servers, and the like. Into the breach stepped the “open source” community, a distributed network of programmers who collaborated on projects that created the infrastructure of the internet. Andreessen came out of that community. Open source had great advantages, most notably that its products delivered excellent functionality, evolved rapidly, and were free. Unfortunately, there was one serious problem with the web and open source products: the tools were not convenient or easy to use. The volunteers of the open source community had one motivation: to build the open web. Their focus was on performance and functionality, not convenience or ease of use. That worked well for the infrastructure at the heart of the internet, but not so much for consumer-facing applications.

The World Wide Web took off in 1994, driven by the Mosaic/Netscape browser and sites like Amazon, Yahoo, and eBay. Businesses embraced the web, recognizing its potential as a better way to communicate with other businesses and consumers. This change made the World Wide Web geometrically more valuable, just as Metcalfe’s Law predicted. The web dominated culture in the late nineties, enabling a stock market bubble and ensuring near-universal adoption. The dot-com crash that began in early 2000 left deep scars, but the web continued to grow. In this second phase of the web, Google emerged as the most important player, organizing and displaying what appeared to be all the world’s information. Apple broke the code on tech style—their products were a personal statement—and rode the consumer wave to a second life. Products like the iMac and iPod, and later the iPhone and iPad, restored Apple to its former glory and then some. At this writing, Apple is the most valuable company in the world. (Fortunately, Apple is also the industry leader in protecting user privacy, but I will get to that later.)

In the early years of the new millennium, a game-changing model challenged the page-centric architecture of the World Wide Web. Called Web 2.0, the new architecture revolved around people. The pioneers of Web 2.0 included people like Mark Pincus, who later founded Zynga; Reid Hoffman, the founder of LinkedIn; and Sean Parker, who had cofounded the music file-sharing company Napster. After Napster, Parker launched a startup called Plaxo, which put address books in the cloud. It grew by spamming every name in every address book to generate new users, an idea that would be copied widely by social media platforms that launched thereafter. In the same period, Google had a brilliant insight: it saw a way to take control of a huge slice of the open internet. No one owned open source tools, so there was no financial incentive to make them attractive for consumers. They were designed by engineers, for engineers, which could be frustrating to non-engineers.

Google saw an opportunity to exploit the frustration of consumers and some business users. Google made a list of the most important things people did on the web, including searches, browsing, and email. In those days, most users were forced to employ a mix of open source and proprietary tools from a range of vendors. Most of the products did not work together particularly well, creating friction that Google could exploit. Beginning with Gmail in 2004, Google created or acquired compelling products in maps, photos, videos, and productivity applications. Everything was free, so there were no barriers to customer adoption. Everything worked together. Every app gathered data that Google could exploit. Customers loved the Google apps. Collectively, the Google family of apps replaced a huge portion of the open World Wide Web. It was as though Google had unilaterally put a fence around half of a public park and then started commercializing it.

The steady march of technology in the half century prior to 2000 produced so much value—and so many delightful surprises—that the industry and customers began to take positive outcomes for granted. Technology optimism was not equivalent to the law of gravity, but engineers, entrepreneurs, and investors believed that everything they did made the world a better place. Most participants bought into some form of the internet utopia. What we did not realize at the time was that the limits imposed by not having enough processing power, memory, storage, and network bandwidth had acted as a governor, limiting the damage from mistakes to a relatively small number of customers. Because the industry had done so much good in the past, we all believed that everything it would create in the future would also be good. It was not a crazy assumption, but it was a lazy one that would breed hubris.

When Zuck launched Facebook in early 2004, the tech industry had begun to emerge from the downturn caused by the dot-com meltdown. Web 2.0 was in its early stages, with no clear winners. For Silicon Valley, it was a time of transformation, with major change taking place in four arenas: startups, philosophy, economics, and culture. Collectively, these changes triggered unprecedented growth and wealth creation. Once the gravy train started, no one wanted to get off. When fortunes can be made overnight, few people pause to ask questions or consider side effects.

The first big Silicon Valley change related to the economics of startups. Hurdles that had long plagued new companies evaporated. Engineers could build world-class products quickly, thanks to the trove of complementary software components, like the Apache server and the Mozilla browser, from the open source community. With open source stacks as a foundation, engineers could focus all their effort on the valuable functionality of their app, rather than building infrastructure from the ground up. This saved time and money. In parallel, a new concept emerged—the cloud—and the industry embraced the notion of centralization of shared resources. The cloud is like Uber for data—customers don’t need to own their own data center or storage if a service provides it seamlessly from the cloud. Today’s leader in cloud services, Amazon Web Services (AWS), leveraged Amazon.com’s retail business to create a massive cloud infrastructure that it offered on a turnkey basis to startups and corporate customers. By enabling companies to outsource their hardware and network infrastructure, paying a monthly fee instead of the purchase price of an entire system, services like AWS lowered the cost of creating new businesses and shortened the time to market. Startups could mix and match free open source applications to create their software infrastructure. Updates were made once, in the cloud, and then downloaded by users, eliminating what had previously been a very costly and time-consuming process of upgrading individual PCs and servers. This freed startups to focus on their real value added, the application that sat on top of the stack. Netflix, Box, Dropbox, Slack, and many other businesses were built on this model.

Thus began the “lean startup” model. Without the huge expense and operational burden of creating a full tech infrastructure, new companies did not have to aim for perfection when they launched a new product, which had been Silicon Valley’s primary model to that point. For a fraction of the cost, they could create a minimum viable product (MVP), launch it, and see what happened. The lean startup model could work anywhere, but it worked best with cloud software, which could be updated as often as necessary. The first major industry created with the new model was social media, the Web 2.0 startups that were building networks of people rather than pages. Every day after launch, founders would study the data and tweak the product in response to customer feedback. In the lean startup philosophy, the product is never finished. It can always be improved. No matter how rapidly a startup grew, AWS could handle the load, as it demonstrated in supporting the phenomenal growth of Netflix. What in earlier generations would have required an army of experienced engineers could now be accomplished by relatively inexperienced engineers with an email to AWS. Infrastructure that used to require a huge capital investment could now be leased on a monthly basis. If the product did not take off, the cost of failure was negligible, particularly in comparison to the years before 2000. If the product found a market, the founders had alternatives. They could raise venture capital on favorable terms, hire a bigger team, improve the product, and spend to acquire more users. Or they could do what the founders of Instagram and WhatsApp would eventually do: sell out for billions with only a handful of employees.

Facebook’s motto—“Move fast and break things”—embodies the lean startup philosophy. Forget strategy. Pull together a few friends, make a product you like, and try it in the market. Make mistakes, fix them, repeat. For venture investors, the lean startup model was a godsend. It allowed venture capitalists to identify losers and kill them before they burned through much cash. Winners were so valuable that a fund needed only one to provide a great return.

When hardware and networks act as limiters, software must be elegant. Engineers sacrifice frills to maximize performance. The no-frills design of Google’s search bar made a huge difference in the early days, providing a competitive advantage relative to Excite, Altavista, and Yahoo. A decade earlier, Microsoft’s early versions of Windows failed in part because hardware in that era could not handle the processing demands imposed by the design. By 2004, every PC had processing power to spare. Wired networks could handle video. Facebook’s design outperformed MySpace in almost every dimension, providing a relative advantage, but the company did not face the fundamental challenges that had prevailed even a decade earlier. Engineers had enough processing power, storage, and network bandwidth to change the world, at least on PCs. Programming still rewarded genius and creativity, but an entrepreneur like Zuck did not need a team of experienced engineers with systems expertise to execute a business plan. For a founder in his early twenties, this was a lucky break. Zuck could build a team of people his own age and mold them. Unlike Google, Facebook was reluctant to hire people with experience. Inexperience went from being a barrier to being an advantage, as it kept labor costs low and made it possible for a young man in his twenties to be an effective CEO. The people in Zuck’s inner circle bought into his vision without reservation, and they conveyed that vision to the rank-and-file engineers. On its own terms, Facebook’s human resources strategy worked exceptionally well. The company exceeded its goals year after year, creating massive wealth for its shareholders, but especially for Zuck. The success of Facebook’s strategy had a profound impact on the human resources culture of Silicon Valley startups.

In the early days of Silicon Valley, software engineers generally came from the computer science and electrical engineering programs at MIT, Caltech, and Carnegie Mellon. By the late seventies, Berkeley and Stanford had joined the top tier. They were followed in the mid-nineties by the University of Illinois at Urbana-Champaign, the alma mater of Marc Andreessen, and other universities with strong computer science programs. After 2000, programmers were coming from just about every university in America, including Harvard.

When faced with a surplus for the first time, engineers had new and exciting options. The wave of startups launched after 2003 could have applied surplus processing, memory, storage, and bandwidth to improve users’ well-being and happiness, for example. A few people tried, which is what led to the creation of the Siri personal assistant, among other things. The most successful entrepreneurs took a different path. They recognized that the penetration of broadband might enable them to build global consumer technology brands very quickly, so they opted for maximum scale. To grow as fast as possible, they did everything they could to eliminate friction like purchase prices, criticism, and regulation. Products were free, criticism and privacy norms ignored. Faced with the choice between asking permission and begging forgiveness, entrepreneurs embraced the latter. For some startups, challenging authority was central to their culture. To maximize both engagement and revenues, Web 2.0 startups focused their technology on the weakest elements of human psychology. They set out to create habits, evolved habits into addictions, and laid the groundwork for giant fortunes.

The second important change was philosophical. American business philosophy was becoming more and more proudly libertarian, nowhere more so than in Silicon Valley. The United States had beaten the Depression and won World War II through collective action. As a country, we subordinated the individual to the collective good, and it worked really well. When the Second World War ended, the US economy prospered by rebuilding the rest of the world. Among the many peacetime benefits was the emergence of a prosperous middle class. Tax rates were high, but few people complained. Collective action enabled the country to build the best public education system in the world, as well as the interstate highway system, and to send men to the moon. The average American enjoyed an exceptionally high standard of living.

Then came the 1973 oil crisis, when the Organization of Petroleum Exporting Countries initiated a boycott of countries that supported Israel in the Yom Kippur War. The oil embargo exposed a flaw in the US economy: it was built on cheap oil. The country had lived beyond its means for most of the sixties, borrowing aggressively to pay for the war in Vietnam and the Great Society social programs, which made it vulnerable. When rising oil prices triggered inflation and economic stagnation, the country transitioned into a new philosophical regime.

The winner was libertarianism, which prioritized the individual over the collective good. It might be framed as “you are responsible only for yourself.” As the opposite of collectivism, libertarianism is a philosophy that can trace its roots to the frontier years of the American West. In the modern context, it is closely tied to the belief that markets are always the best way to allocate resources. Under libertarianism, no one needs to feel guilty about ambition or greed. Disruption can be a strategy, not just a consequence. You can imagine how attractive a philosophy that absolves practitioners of responsibility for the impact of their actions on others would be to entrepreneurs and investors in Silicon Valley. They embraced it. You could be a hacker, a rebel against authority, and people would reward you for it. Unstated was the leverage the philosophy conferred on those who started with advantages. The well-born and lucky could attribute their success to hard work and talent, while blaming the less advantaged for not working hard enough or being untalented. Many libertarian entrepreneurs brag about the “meritocracy” inside their companies. Meritocracy sounds like a great thing, but in practice there are serious issues with Silicon Valley’s version of it. If contributions to corporate success define merit when a company is small and has a homogeneous employee base, then meritocracy will encourage the hiring of people with similar backgrounds and experience. If the company is not careful, this will lead to a homogeneous workforce as the company grows. For internet platforms, this means an employee base consisting overwhelmingly of white and Asian males in their twenties and thirties. This can have an impact on product design. For example, Google’s facial-recognition software had problems recognizing people of color, possibly reflecting a lack of diversity in the development team. Homogeneity narrows the range of acceptable ideas and, in the case of Facebook, may have contributed to a work environment that emphasizes conformity. The extraordinary lack of diversity in Silicon Valley may reflect the pervasive embrace of libertarian philosophy. Zuck’s early investor and mentor Peter Thiel is an outspoken advocate for libertarian values.

The third big change was economic, and it was a natural extension of libertarian philosophy. Neoliberalism stipulated that markets should replace government as the rule setter for economic activity. President Ronald Reagan framed neoliberalism with his assertion that “government is not the solution to our problem; government is the problem.” Beginning in 1981, the Reagan administration began removing regulations on business. Reagan restored confidence, which unleashed a big increase in investment and economic activity. By 1982, Wall Street bought into the idea, and stocks began to rise. Reagan called it Morning in America. The problems—stagnant wages, income inequality, and a decline in startup activity outside of tech—did not emerge until the late nineties.

Deregulation generally favored incumbents at the expense of startups. New company formation, which had peaked in 1977, has been in decline ever since. The exception was Silicon Valley, where large companies struggled to keep up with rapidly evolving technologies, creating opportunities for startups. The startup economy in the early eighties was tiny but vibrant. It grew with the PC industry, exploded in the nineties, and peaked in 2000 at $120 billion, before declining by 87 percent over two years. The lean startup model collapsed the cost of startups, such that the number of new companies rebounded very quickly. According to the National Venture Capital Association, venture funding recovered to seventy-nine billion dollars in 2015 on 10,463 deals, more than twice the number funded in 2008. The market power of Facebook, Google, Amazon, and Apple has altered the behavior of investors and entrepreneurs, forcing startups to sell out early to one of the giants or crowd into smaller and less attractive opportunities.

Under Reagan, the country also revised its view of corporate power. The Founding Fathers associated monopoly with monarchy and took steps to ensure that economic power would be widely distributed. There were ebbs and flows as the country adjusted to the industrial revolution, mechanization, technology, world wars, and globalization, but until 1981, the prevailing view was that there should be limits to the concentration of economic power and wealth. The Reagan Revolution embraced the notion that the concentration of economic power was not a problem so long as it did not lead to higher prices for consumers. Again, Silicon Valley profited from laissez-faire economics.

Technology markets are not monopolies by nature. That said, every generation has had dominant players: IBM in mainframes, Digital Equipment in minicomputers, Microsoft and Intel in PCs, Cisco in data networking, Oracle in enterprise software, and Google on the internet. The argument against monopolies in technology is that major innovations almost always come from new players. If you stifle the rise of new companies, innovation may suffer.
