Rob was reading this book, so I jumped in to start reading it, too. In it, Kelly posits twelve inevitable (hence the title) technological forces / trends / changes that will shape our future. He gives them odd names, so that they are all gerunds:
Becoming: everything's upgrading, so we'll always forever be newbies
Cognifying: I suspect a made-up word; basically AI everywhere, even dumb AI
Flowing: everything is real-time and instant access becomes more instanter (yes, I did make up that word)
Screening: everything becomes a screen; I hate this idea
Accessing: no one owns much, so the corps own the big stuff, we just rent
Sharing: no one owns much, so the corps own the big stuff, we just rent, and share it
Filtering: everything is curated, unfortunately, likely by the AI
Remixing: everyone steals from everyone else and makes a meme out of it, or at least makes things better, pretty much humankind forever
Interacting: AR / VR
Tracking: total surveillance, nominally "for the benefit of citizens and consumers" but in reality serving an authoritarian state
I think Kelly started reaching on these, but there's also:
Questioning: the idea that good questions are far more valuable than good answers (except that too many people don't question, don't think)
Beginning: going global
There were parts of the book that I really wanted to scream NO NO NO at. Except Kelly isn't saying "here's what I propose," he's saying, "here's what I see." Screaming "No!" at a wall of water doesn't stop the flood; building a seawall stops the worst of it. Which might have been a reason for writing and reading this book.
Worth reading. Maybe reading twice.
Our greatest invention in the past 200 years was not a particular gadget or tool but the invention of the scientific process itself.
Get the ongoing process right and it will keep generating ongoing benefits. In our new era, processes trump products.
You may not want to upgrade, but you must because everyone else is. It’s an upgrade arms race. I used to upgrade my gear begrudgingly (why upgrade if it still works?) and at the last possible moment. You know how it goes: Upgrade this and suddenly you need to upgrade that, which triggers upgrades everywhere. I would put it off for years because I had the experience of one “tiny” upgrade of a minor part disrupting my entire working life.
[D]elaying upgrading is even more disruptive. If you neglect ongoing minor upgrades, the change backs up so much that the eventual big upgrade reaches traumatic proportions.
I can confirm this statement.
Technological life in the future will be a series of endless upgrades.
No matter how long you have been using a tool, endless upgrades make you into a newbie—the new user often seen as clueless. In this era of “becoming,” everyone becomes a newbie. Worse, we will be newbies forever. That should keep us humble. That bears repeating. All of us—every one of us—will be endless newbies in the future simply trying to keep up.
Second, because the new technology requires endless upgrades, you will remain in the newbie state. Third, because the cycle of obsolescence is accelerating (the average lifespan of a phone app is a mere 30 days!), you won’t have time to master anything before it is displaced, so you will remain in the newbie mode forever. Endless Newbie is the new default for everyone, no matter your age or experience.
We keep inventing new things that make new longings, new holes that must be filled. Some people are furious that our hearts are pierced this way by the things we make. They see this ever-neediness as a debasement, a lowering of human nobility, the source of our continual discontentment.
This discontent is the trigger for our ingenuity and growth. We cannot expand our self, and our collective self, without making holes in our heart.
A world without discomfort is utopia. But it is also stagnant.
None of us have to worry about these utopia paradoxes, because utopias never work. Every utopian scenario contains self-corrupting flaws.
The flaw in most dystopian narratives is that they are not sustainable. Shutting down civilization is actually hard.
Nature finds a way. Especially when said nature contains people.
The problems of today were caused by yesterday’s technological successes, and the technological solutions to today’s problems will cause the problems of tomorrow.
The problem with constant becoming (especially in a protopian crawl) is that unceasing change can blind us to its incremental changes. In constant motion we no longer notice the motion.
The disruption ABC could not imagine was that this “internet stuff” enabled the formerly dismissed passive consumers to become active creators.
The total number of web pages, including those that are dynamically created upon request, exceeds 60 trillion. That’s almost 10,000 pages per person alive.
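The "almost 10,000 pages per person" figure checks out as rough arithmetic. A quick sketch, assuming a world population of about 7 billion (roughly correct when the book was written; the exact figure is my assumption, not the book's):

```python
# Sanity-check the claim: 60 trillion web pages divided by
# ~7 billion people should land near 10,000 pages per person.
total_pages = 60e12   # 60 trillion, per the book
population = 7e9      # ~7 billion, an assumed figure

pages_per_person = total_pages / population
print(round(pages_per_person))  # → 8571, i.e. "almost 10,000"
```

So the claim is in the right ballpark, if a little generous.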
What we all failed to see was how much of this brave new online world would be manufactured by users, not big institutions.
The audience was a confirmed collective couch potato, as the ABC honchos assumed. Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs in terms of cost and expertise.
One study a few years ago found that only 40 percent of the web is commercially manufactured. The rest is fueled by duty or passion.
In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. Find something that can be made better by adding online smartness to it.
The list of Xs is endless. The more unlikely the field, the more powerful adding AI will be.
When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny–looking image, you are teaching the AI what an Easter Bunny looks like.
My prediction: By 2026, Google’s main product will not be search but AI.
Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets.
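The superlinear growth Kelly describes is often modeled with Metcalfe's law, which values a network by its number of possible pairwise connections. A minimal sketch, assuming that simplified value function (the book names the network effect but not a specific formula):

```python
# Metcalfe's law as a toy model of the network effect: value grows
# with the number of possible pairwise connections, n*(n-1)/2,
# which is quadratic in n -- "much faster as it grows bigger."

def network_value(users: int) -> int:
    """Potential pairwise connections in a network of `users` members."""
    return users * (users - 1) // 2

# Doubling the user base more than doubles the value:
assert network_value(2000) > 2 * network_value(1000)

for n in (10, 100, 1000):
    print(n, network_value(n))
```

The same shape explains the oligarchy prediction that follows: under any superlinear value function, the biggest network pulls away from its rivals.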
As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
Because of a quirk in our evolutionary history, we are cruising as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular.
One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.
Imagine we land on an alien planet. How would we measure the level of the intelligences we encounter there? This is an extremely difficult question because we have no real definition of our own intelligence, in part because until now we didn’t need one.
Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that reluctance in our difficulty in approving mathematical proofs done by computer.
We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for. If we aren’t unique toolmakers, or artists, or moral ethicists, then what, if anything, makes us special?
We aren’t giving “good jobs” to robots. Most of the time we are giving them jobs we could never do. Without them, these jobs would remain undone.
It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can’t see these jobs from here, because we can’t yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.
The one thing humans can do that robots can’t (at least for a long while) is to decide what it is that humans want to do. This is not a trivial semantic trick; our desires are inspired by our previous inventions, making this a circular question.
This is not a race against the machines. If we race against them, we lose. This is a race with the machines.
It is inevitable. Let the robots take our jobs, and let them help us dream up new work that matters.
We can’t stop massive indiscriminate copying. Not only would that sabotage the engine of wealth if we could, but it would halt the internet itself.
The initial age of computing borrowed from the industrial age. As Marshall McLuhan observed, the first version of a new medium imitates the medium it replaces. The first commercial computers employed the metaphor of the office.
Then, in the second age, along came the web, and very quickly we expected everything the same day.
Our cycle time jumped from batch mode to daily mode. This was a big deal.
Now in the third age, we’ve moved from daily mode to real time.
In predigital days I bought printed books long before I intended to read them. If I spied an enticing book in a bookstore, I bought it.