UC San Diego student offers ways to be ‘robot-proof’ in the age of AI – San Diego Union-Tribune
Vivienne Ming, a UC San Diego alumna and longtime machine learning researcher, argues in her new book, “Robot-Proof,” that the real threat of artificial intelligence isn't just job displacement — it's people using AI passively and losing the skills that make them valuable.
Ming, a theoretical neuroscientist, inventor and entrepreneur who graduated from UCSD in 2000 with a bachelor's degree in cognitive science, offers a playbook for becoming “robot-proof” by using AI inquisitively, challenging it and building the human strengths that AI can't replicate.
“We should be careful that what we're building doesn't automate away the very capacities that make us human,” said Ming, who also has advanced degrees in psychology from Carnegie Mellon University. She founded the Berkeley-based web application developer Socos Labs in 2011 and is chief scientist for Possibility Sciences, a group that works to narrow what it calls the “possibility gap — the distance between what we can imagine and what our systems can reliably activate.”
As AI tools have become part of daily life, many Americans say they're more concerned than excited. And Ming agrees that there are a lot of reasons people should be wary of AI.
But she says one of her biggest concerns is that the machine learning industry is moving in the wrong direction, pouring resources into making AI smarter and more autonomous while neglecting the human side of the equation.
Here she discusses her book and how people can work with AI in a way that would best serve humanity.
Q. How is the AI revolution different from earlier technology-driven transformations — such as the Industrial Revolution, the microprocessor/personal computing revolution and the rise of the internet — that rapidly reshaped how people live and work?
A. I have a whole chapter titled “This is not the Industrial Revolution.” There's this lazy analogy people reach for: “Oh, people complained about calculators, too,” and therefore this is all just the same cycle repeating. But it's not a clean equivalence. Calculators didn't stop you from thinking — you stopped doing the low-level computation and then you did other things with those computations. That kind of tool still leaves your cognition engaged.
What's different now is that modern agentic systems will happily do all of it. And the danger is that people start disengaging in a way we can actually measure. If you look at these technologies over time — printing, computers, the internet — we do see subtle changes in cognition.
But more recently, with GPS and algorithmic feeds, those changes have become measurable and, frankly, more concerning. People are changing how they think when they use these tools in ways that scare me.
So one big difference is that AI is hitting us right in our cognitive core. It's not automating physical activity. It's not even just automating a low-level cognitive task that's deeply boring. It can automate the entire process — and that is historically new. And that means we have to be much more thoughtful about what we automate vs. what we augment.
Q. You've said you wrote this book because today's AI policy debates often miss what's best for people. What are we getting wrong?
A. On one side you have the AI utopians — the “wave the AI wand and everything gets perfect” crowd. “You'll never have to work again. There'll never be cancer.” It's absurd. I call it the imagination disease: “I can imagine a world in which everything's perfect, therefore it will be perfect.” And when you add trillions of dollars of investment pressure on top of that, it gets even worse because humans can't deal with that kind of pressure.
Then on the other side you have the dystopian story: “AI is going to destroy us all, it's going to take all the jobs, it's going to ruin everything.” And I've been building this stuff for 30 years. I've used it for my son's diabetes. I've used it for refugees. Bipolar disorder. Postpartum depression. Perimenopausal depression. I built literal cyborgs early on — using AI to help improve cochlear implants so people could hear speech in noise. So I don't buy the simplistic dystopia either.
The problem is, almost nobody is talking about what I think is the most important frame. If you look at AI as an astonishing cognitive tool, the question becomes “How does it make human beings better?” And then, “What does that mean for education, workforce policy, infrastructure — everything?”
Q. In the book, you describe experiments you ran that were designed to find out which kinds of people use AI most effectively. What did you discover?
A. We ran this experiment where small groups of students from UC Berkeley had an hour to make 10 predictions about the future. For example: What will the price of oil be in six months? Humans are terrible at that — unsurprisingly. We're no good at making predictions about things we don't know anything about. The smallest open-source model we used was better than the best human by a lot. And the bigger and more sophisticated the model, the better it did.
Then we looked at what I call hybrid intelligence — what happens when you put people and machines together. And we got two very different patterns. One group — what we called the “automators” — would basically say “Gemini, GPT, give me the answer” and then submit it. They're not collaborating. I put an electroencephalogram, or EEG, on a couple of them and compared to people reasoning on their own — or even just using Google — there was dramatically less cognitive activity.
But then there was another group — about 10% of the Berkeley students — who became what we called “cyborgs.” They would push back: “What about this?” The AI would say “But the data…” and they'd say “OK, not that — what about this instead?”
There's a back-and-forth where they actively explore why they might be wrong. They don't just accept the answer. Those cyborg teams did better than the best people, and they did better than the best models. In fact, three students with no prior knowledge performed comparably to prediction markets — like the kind where tens of thousands of people have money on the line. That's really exciting.
The catch is, it was a small percentage. Which means it's not enough to say AI makes people better. We have to ask “What makes the cyborg pattern happen and how do we pull more people into it?”
Q. You've said it doesn't matter much which AI model people use, only how they use it. What does that imply for the AI industry, which is spending tons of capital to build better models?
A. That's a huge deal, because right now, nearly every major company is optimizing for autonomy.
Read the model cards, read the benchmarks: it's all about what the system can do on its own. AI optimized only for autonomy is a dead end for humanity. If the goal is to make people better, then we should be building systems designed around productive friction — systems that challenge you, that help you explore, that don't just hand you the answer. But those systems would score worse on autonomy benchmarks by definition, because they aren't doing the work alone.
So from an industry standpoint, we're measuring the wrong thing. We're building toward the wrong end state. And we're leaving the most valuable use case — the one that actually improves human capability — underdeveloped.
Q. You've said your biggest fear isn't a sci-fi takeover, it's a future in which people rely on AI too passively. Can you explain?
A. Cognitive decline is a long-term phenomenon. It's not like “Oh my God, my child asked AI for an answer and now they're doomed.” This is more like a lifestyle issue. And it's not wrong that people use tools in shallow ways sometimes — we didn't evolve to be deep all the time. That would be exhausting.
The concern is what happens when shallow use becomes the default and there's little or no cognitive engagement. In our experiment, the “automators” were basically using AI as a substitute. They'd get an answer and submit it.
And you see it outside the lab, too — people scrolling, people consuming outputs, never really asking “Why do I believe this? What's missing? What's the alternative?”
So what does cognitive decline look like? It can look like disengagement. It can look like losing the habit of wrestling with uncertainty. It can look like becoming less able — or less willing — to check your own thinking. Over time, that's a real loss.
Q. What does it look like to use AI constructively, to become a “cyborg” or an AI-powered human instead of an “automator” — someone who depends on AI too much, which leads to cognitive decline?
A. The key is that it's only when humans and machines are genuinely working together — where the human challenges the AI and the AI challenges the human — that you get the dynamic that produces better outcomes than either alone.
We tried a simple intervention: We fine-tuned a small open-source model to not give answers. It would ask questions and push students instead. The students hated it. They were like, “Stop being Socrates — just tell me the price of oil!”
But twice as many of them switched into cyborg mode and achieved superhuman performance. That's the hint: The goal isn't comfort. The goal is productive friction. Use AI to challenge you, not just to reward your first thought.
A practical example is what I call the “Nemesis prompt.” I used it while writing. I didn't let the AI write chapters. I wrote the chapter, then I'd say something like “You are my nemesis — my lifelong enemy. You've found every mistake I've ever made. Here's the draft. Tell me why I'm wrong, in detail, and how to make it better.”
Then you can flip it: “Now you're a bored reader. Tell me why this doesn't matter to you and how to make it connect without dumbing it down.”
That's a very different relationship with the tool than “Give me the answer.”
Q. What advice do you have for parents and teachers who want to prepare kids to thrive in the age of AI?
A. One thing I say in the book is, our education system has largely been built around well-posed problems — problems where we already understand the question and we already know the answers, or the formula that gets you to the answer. Then we grade kids on how well they reproduce the “right” answers.
I don't need that anymore. I have all those answers for free in my pocket — better, cheaper, faster than a human could give them. That doesn't mean kids shouldn't learn fundamentals; they're still important. But the whole effort changes. What's left is our ability to explore the unknown — the ill-posed problems. To do that, kids need to be willing to be wrong sometimes. They need curiosity. They need intellectual humility — the ability to hear “you're wrong” and respond with curiosity instead of collapse. They need perspective-taking — understanding what other people think and what other people think about what you think.
Some of that is early-life development: rich conversation, reading, enriched environments, diverse experiences — those support working memory and the foundations of fluid intelligence. But after that, a lot of it becomes maintenance and practice. And you can do very concrete things. Reward questions, not just answers. Build a culture where asking is valued. Encourage productive failure.
Try a “failure diary” — not to glorify failure but to link mistakes to learning and growth. Help kids see mistakes as information. Then reinforce it daily: Use GPS to get around, but don't surrender to it. Check the route and ask “Do I know better?” “Why this way?”
Keep the convenience and keep your brain online.
— La Jolla Light staff contributed to this article, which first appeared in the UC San Diego Today publication by UCSD Communications. It is republished here with permission.
