I am standing on a stage, behind a waist-high podium with my first name on it. To my right is a woman named Vicki; she’s behind an identical podium with her name on it. Between us is a third podium with no one behind it, just the name “Watson” on the front. We are about to play Jeopardy!
This is the National Retail Federation’s mammoth annual conference at New York City’s Javits Center, and in addition to doing some onstage moderating, I have insanely agreed to compete against IBM’s Watson, the cognitive computing system, whose power the company wants to demonstrate to the retailers. Watson’s defeat of Jeopardy!’s two greatest champions is almost a year old, so I’m not expecting this to go well. But I’m not prepared for what hits me.
We get to a category called Before and After at the Movies. First clue, for $200: “Han Solo meets up with Lando Calrissian while time-traveling with Marty McFly.”
Umm … what?
Watson has already buzzed in. “What is The Empire Strikes Back to the Future?” it responds correctly.
It picks the same category for $400: “James Bond fights the Soviets while trying to romance Ali MacGraw before she dies.” I’m still struggling with the concept, but Watson has already buzzed in. “What is From Russia With Love Story?” Right again.
By the time I figure this out, Watson is on the category’s last clue: “John Belushi & the boys set up their fraternity in the museum where crazy Vincent Price turns people into figurines.” The correct response, as Watson instantly knows, is “What is Animal House of Wax?” Watson has run the category.
I do get some questions right in other categories, and Watson gets some wrong, but at the end of our one round I have been shellacked. I actually don’t remember the score, which must be how the psyche protects itself. I just know for sure that I have witnessed something profound.
Realize that Watson is not connected to the Internet. It’s a freestanding machine just like me, relying only on what it knows. It has to hear and understand the emcee’s spoken words, just as I do. In addition, Watson has a built-in delay when buzzing in to answer a clue. We humans must use our prehistoric muscle systems to push a button that closes a circuit and sounds the buzzer. Watson could do it at light speed with an electronic signal, so the developers interposed a delay to level the playing field. Otherwise I’d never have a prayer of winning, even if we both knew the correct response. But, of course, even with the delay, I lost.
So let’s confront reality: Watson is smarter than I am. In fact, I’m surrounded by technology that’s better than I am at sophisticated tasks. Google’s autonomous car is a better driver than I am. The company has a whole fleet of the vehicles, which have driven hundreds of thousands of miles with only one accident while in autonomous mode, when one of the cars was rear-ended by a human driver at a stoplight. Computers are better than humans at screening documents for relevance in the discovery phase of litigation, an activity for which young lawyers used to bill at an impressive hourly rate. Computers are better at detecting some kinds of human emotion, despite our million years of evolution that was supposed to make us razor sharp at that skill.
One more thing. I competed against Watson two years ago. Today’s Watson is 240% faster. I am not. And I’ll guess that you aren’t either. Most things in our world slow down as they get bigger and older: A small startup can easily grow 100% a year, but a major Fortune 500 firm may struggle to grow 5%. Technology isn’t constrained that way. Today’s systems, as awesomely powerful as they are, will be 100% more awesomely powerful in two years. In a decade they’ll be 32 times more powerful.
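For readers who want to see the arithmetic behind that claim, here is a minimal sketch in Python, purely illustrative: it assumes, as the paragraph does, that computing power doubles roughly every two years, and the function name is mine, not anything from IBM or the chipmakers.

    # Illustrative only: assumes capability doubles every two years (the paragraph's premise).
    def projected_multiple(years, doubling_period=2):
        # Multiple of today's computing power after `years` years of steady doubling.
        return 2 ** (years / doubling_period)

    print(projected_multiple(2))   # 2.0  -> "100% more powerful" in two years
    print(projected_multiple(10))  # 32.0 -> "32 times more powerful" in a decade

Five doublings in ten years is what turns a 100%-every-two-years pace into a 32-fold gain; nothing about human learning, or corporate growth, compounds that way.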
The issue, a momentous one, is obvious. In this environment, what will be the high-value skills of tomorrow, the jobs that will pay well for us and our kids? That eternal concern increasingly comes down to this stark query: What will people do better than computers?
Several factors are combining to make the question especially urgent now. The economy’s sorry job-generating performance has left economists struggling for an explanation. For decades the U.S. economy regularly returned to pre-recession employment levels about 18 months after the recession started; this time it took 77 months. How come? Why are wages stagnating for wide swaths of the U.S. population? Could advancing technology be part of the reason?
For over two centuries the answer to that last question has been clear: No. Practically every advance in technology has sparked worries that it would destroy jobs, and it did destroy them — but it also created even more new jobs, and the improved technology made those jobs more productive and higher paying. The fears of Luddites past and present have always been unfounded. Technology has lifted living standards spectacularly.
That orthodoxy, one of the firmest in economics, is now being questioned. Larry Summers, star economist and former Treasury Secretary, delivered a significant lecture to an audience of top economists last year in which he said, “Until a few years ago I didn’t think this was a very complicated subject; the Luddites were wrong, and the believers in technology and technological progress were right. I’m not so completely certain now.”
Observing the increasingly sophisticated tasks that computers are doing better than humans — and the growing percentage of men ages 25 to 54 who aren’t working — Summers announced a portentous conclusion: “This set of developments is going to be the defining economic feature of our era.” No less an authority on technology than Microsoft co-founder Bill Gates agrees: “Twenty years from now, labor demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.”
Even without reducing total jobs, technology has been changing the nature of work and the value of particular skills for over 200 years, since the dawn of the Industrial Revolution. The story so far comprises just three major turning points. At first, the rise of industrial technology devalued the skills of artisans, who handcrafted their products from beginning to end. A gunmaker carved the stock, cast the barrel, engraved the lock, filed the trigger, and painstakingly fitted the pieces together. But in Eli Whitney’s Connecticut gun factory, separate workers did each of those jobs, or just portions of them, using water-powered machinery, and components of each type were identical. Low-skilled workers were in demand, and skilled artisans weren’t.
The second turning point arrived in the early 20th century, when the trend reversed. Widely available electricity enabled far more sophisticated factories, requiring better-educated, more highly skilled workers; companies also grew far larger, requiring a larger corps of educated managers. Now the unskilled were out of luck, and educated workers were in demand. Through most of the 20th century, Americans responded by becoming better educated as technology continued to advance, producing an economic miracle of fast-rising living standards.
But then the third major turning point arrived, starting in the 1980s. Information technology developed to a point where it could take over many medium-skilled jobs — bookkeeping, back-office jobs, repetitive factory work. Those jobs diminished, and their wages stagnated. Yet at both ends of the skill spectrum, high-skill jobs and low-skill service jobs did much better. Information technology couldn’t take over the problem-solving, judging, coordinating tasks of high-skill workers; in fact it made those workers more productive by giving them more information at lower cost. And IT didn’t threaten low-skill service workers because computers were terrible at skills of physical dexterity: A computer could defeat a grand master chess champion but couldn’t pick up a pencil from a tabletop. Home health aides, gardeners, cooks, and others could breathe easy.
Until very recently that pattern held: While IT was crushing medium-skill workers, those at the two ends of the skill spectrum were safe. Now, in a rapid series of developments, we are at a fourth turning point. IT is advancing steadily into both ends of the spectrum, threatening workers who thought they didn’t have to worry.
At the top end, what’s happening to lawyers is a model for any occupation involving analysis, subtle interpretation, strategizing, and persuasion. The computer incursion into the legal-discovery process is well known. In cases around the country, computers are reading millions of documents and sorting them for relevance without getting tired or distracted. But that’s just the beginning. Computers are also becoming highly skilled at searching the legal literature for appropriate precedents in a given case, far more widely and thoroughly than people can do. Humans still have to identify the legal issues involved, but as Northwestern University law professor John O. McGinnis points out in a recent article, “Search engines will eventually do this by themselves, and then go on to suggest the case law that is likely to prove relevant to the matter.”
Advancing even deeper into the territory of lawyerly skill, computers can already predict Supreme Court decisions better than lawyers can. As such analytical power expands in scope, computers will move nearer to the heart of what lawyers do by advising better than lawyers can on whether to sue or settle or go to trial before any court and in any type of case. Companies such as Lex Machina and Huron Legal already offer such analytical services, which are improving by the day.
None of this means that lawyers will disappear, but it suggests that the world will need fewer of them. It’s already happening. “The rise of machine intelligence is probably partly to blame for the current crisis of law schools” — shrinking enrollments, falling tuitions — “and will certainly worsen that crisis,” says McGinnis. With infotech thoroughly disrupting even a field so advanced that it requires three years of graduate education and can pay extremely well, other high-skill workers — analysts, managers — can’t help wondering about their own futures.
Developments at the opposite end of the skill spectrum are at least as surprising. In the physical realm, robots have been good mainly at closely prescribed, repetitive tasks — welding on an auto assembly line, for example. That’s all changing radically. Google’s autonomous cars are an obvious example, but many more are appearing. You can train a Baxter robot from Rethink Robotics to do all kinds of things — pack or unpack boxes, take items to or from a conveyor belt, carry things around, count them, inspect them — just by moving its arms and hands (“end effectors”) in the desired way. Baxter won’t hurt anyone as it hums about the shop floor; it adapts its movements to its environment by sensing everything around it, including people.
Still more advanced is a robotic hand developed by a team from Harvard, Yale, and iRobot, maker of the Roomba vacuum cleaner and many other mobile robots. So fine are its motor skills that it can pick up a credit card from a tabletop, put a drill bit in a drill, and turn a key. “A disabled person could say to a robot with hands, ‘Go to the kitchen and put my dinner in the microwave,’ ” one of the researchers, Harvard professor Robert Howe, recently told Harvard Magazine.
The overwhelming message seems to be that no one is safe. Technological unemployment, the 200-year-old terror that has never arrived, may finally be here. But even if that’s true — and it’s far too soon to say that it is — it will also be true that, as always, technology is making some skills more valuable and others less so. At this fourth great turning point, which skills will be the winners?
The answer is becoming clear. Think about lawyers again. Average lawyers “face a bleak future,” in McGinnis’s view. Their best chance of making a living may be “by persuading angry and irrational clients to act in their self-interest,” he says. “Machines won’t be able to create the necessary emotional bonds to perform this important service.” In addition, a few “superstars” will do well by using technology to cut their costs (they won’t need many associates) and to turbocharge their “uniquely human judgment” in highly complex cases.
It just seems common sense that the skills that computers can’t acquire — forming emotional bonds, making human judgments — will be valuable. Yet the lesson of history is that it’s dangerous to claim there are any skills that computers cannot eventually acquire. IBM is teaching Watson how to be persuasive; the initiative is called Debater. We haven’t reached the world of Her, the recent Oscar-winning movie in which a man falls in love with the operating system in his infotech devices, but the film captivated viewers and critics because it envisioned a future we can imagine.
The deeper reality may be that people will value most highly those skills that they simply insist be performed by a human, even if a computer, objectively evaluated, could do them just as well. For example, we’ll want our disputes adjudicated by human judges and juries, even if computers could weigh far more factors in reaching a decision. We’ll want to hear our diagnosis from a doctor, even if a computer supplied it, because we’ll want to talk to the doctor about it — perhaps just to talk and know we’re being heard by a human being. We will want to follow human leaders, even if a computer could say all the right words, which is not an implausible prospect.
Consider the skills in highest demand over the next five to 10 years as specified by employers in a recent survey by Towers Watson and Oxford Economics. Those skills didn’t include business acumen, analysis, or P&L management. Instead, relationship building, teaming, co-creativity, cultural sensitivity, and managing diverse employees were all near the top.
The emerging picture of the future casts conventional career advice in a new light. Most notably, recommendations that students study STEM subjects — science, technology, engineering, math — need fine-tuning. It’s great advice at the moment; eight of the 10 highest-paying college majors are in engineering, says recent research, and those skills will remain critically important. But important isn’t the same as high value or well-paid. As infotech continues its advance into higher skills, value will continue to move elsewhere. Engineers will stay in demand, but tomorrow’s most valuable engineers will not be geniuses in cubicles; rather, they’ll be those who can build relationships, brainstorm, and lead.
It’s tempting to find comfort in the notion that right-brain skills will gain value. Calculus is hard, but we all understand emotions, right? Yet not everyone will benefit. We may all understand emotions, but we won’t all want to go there. Building the skills of human interaction, embracing our most essentially human traits, will play to some people’s strengths and make others deeply uncomfortable. Those people will be in trouble.
For as long as computers have existed they’ve been scaring people, eliminating jobs, creating jobs, devaluing some skills, and exalting others. Yet it would not be correct to say of today’s situation that it was ever thus. It wasn’t. Because the growth of computing power doesn’t slow down as it gets large, we’re racing into a genuinely different future. As computers begin to acquire some of the most advanced cognitive and physical human skills, we confront a new reality. In a way that has not been true before, the central issue for the economy and for all of us who work in it will be the answer to the question: What will people do better than computers?
This story is from the June 16, 2014 issue of Fortune.