You can learn a surprising amount by kicking things. It’s an epistemological method you often see deployed by small children, who target furniture, pets, and their peers in the hope of answering important questions about the world. Questions like “How solid is this thing?” and “Can I knock it over?” and “If I kick it, will it kick me back?”
Kicking robots is something of a pastime among roboticists. Although the activity generates anxiety for lay observers prone to worrying about the prospect of future retribution, it also happens to be an efficient method of testing a machine’s balance. In recent years, as robots have become increasingly sophisticated, their makers have gone from kicking them to shoving them, tripping them, and even hitting them with folding chairs. It may seem gratuitous, but as with Dr. Johnson’s famous response to Bishop Berkeley’s doctrine of immaterialism, there’s something grounding about applying the boot. It helps separate what’s real from what’s not.
All of this is going through my head in April, when I find myself face-to-face with a robot named Apollo. Apollo is a humanoid: a robot with two arms and two legs, standing five feet eight inches tall, with exposed wires, whirring motors, and a smooth plastic head resembling a mannequin’s. Like so many humanoids, Apollo exemplifies the uncanny, hyperreal nature of modern robotics, simultaneously an image from science fiction and a real, tangible machine.
Robots like Apollo are seemingly everywhere these days. There are headlines about Chinese bots running half marathons, ominous videos of muscled humanoids twitching on gantries, clips of robot fight clubs. Sometimes you get the feeling that these machines constitute a fifth column of sorts—a not-so-secret cell, growing in number, biding its time, preparing for the uprising. Economists are looking forward to it. Around the world, they point out, population growth is slowing and labor shortages are spreading. Without humanoids to step into the breach, and quickly, the global economy could descend into chaos. Bank of America forecasts that there will be at least a million humanoid robots shipped annually by 2035, while Morgan Stanley predicts that more than a billion will be in use by 2050. If all goes according to plan, robotics could constitute the largest industry in the world, generating annual revenue upwards of $5 trillion. Elon Musk, that sage of understatement, claims that Tesla’s own Optimus robot will one day “be more productive than the entire global economy.”
Apollo, October 2025
Apollo’s creator, the U.S. startup Apptronik, is a frontrunner in this emerging industry. The company says it’s building the first general-purpose commercial robot, a machine that will one day be able to take on any type of physical labor currently performed by humans, whether cleaning houses or assembling cars. Unsure how much to believe of what I’ve seen on social media, I’ve traveled from London to Austin, Texas, to see Apollo for myself. Against prophecies of doom and salvation, “stability testing” seems like a crude way to gauge the technology’s development, but it’s a good place to start.
As I square up to Apollo in a plexiglass arena, my first instinct is, naturally, to raise a foot. But the kick test is too dangerous for visiting journalists, I’m told. Instead, someone hands me a wooden pole with a piece of foam taped around one end and mimes poking the machine in its chest. Ah, I think, the scientific method. In front of me, as various motors rev up to speed, the robot shuffles in place, looking like an arthritic boxer readying for a fight. On the other side of the plexiglass, a group of engineers chat casually with one another and glance over at a bank of monitors. One of them gives me a thumbs-up. Have at it.
My first shove is hesitant. I’ve been told that the prototype in front of me is worth around $250,000, and while breaking it would make for a good story, it would also be the end of my visit to Apptronik. In response to my prod, the bot merely teeters. It’s heavier than I’d expected, around 160 pounds. It feels, well, like a person. “Oh, you can do it harder than that,” says an engineer, and I jab forward again. Nothing. Apollo is still trotting on the spot. Fine, I think, I’ll give it a real push. Drawing back, I grip my makeshift spear and strike the robot hard in the chest. It staggers backward, stamping its feet, flinging its arms toward me in an appealingly human gesture. I’m struck by a flash of involuntary alarm, whether out of sympathy for a fellow being or fear of an expensive accident I can’t say. For a moment, the robot looks like it might fall, then regains its balance and returns to its position in front of me. I look at its blank face with wonder and disquiet. It seems pretty real to me.
The current era of humanoid hype has a clear inauguration date. On August 19, 2021, at Tesla’s AI Day, a press jamboree to promote the company’s latest tech and future plans, Musk took the stage and announced that he was building a robot. The machine—then known as the Tesla Bot, now called Optimus—would have “human-level hands,” he promised. It would be able to perform “dangerous, repetitive, boring tasks” and follow simple commands like “Go to a store and get me the following groceries.” Lacking a prototype to show the audience, Musk had arranged for a man in a spandex robot costume to stand in for the Tesla Bot. The figure mounted the stairs with cartoonishly stiff movements before breaking into a frenetic, seemingly improvised dance as dubstep blared from nearby speakers. After the dancer exited to scattered applause, Musk shuffled back onstage. “Obviously that was not real,” he said.
Musk’s showcase captured perfectly the power of spectacle to mask technological shortcomings and bridge the gap between expectation and reality. But for those who’d been toiling away on humanoid robots for decades, Musk’s announcement was something more than a publicity stunt. “After Tesla Bot, the whole world sort of woke up to humanoids,” Jeff Cardenas, Apptronik’s co-founder and CEO, told me. The number of people working on the technology prior to Tesla’s AI Day pronouncement “could fit in a small room,” but Musk transformed the industry almost overnight, even if all the public had seen of the Tesla Bot was a slide deck and a gyrating man in a robot costume.
After my bout with Apollo, Cardenas and I head to Apptronik’s conference room. Cardenas is tall and handsome, one of those men who manages to straddle the jock-nerd divide, and whose behavior can be read as either supremely confident or socially oblivious. When he holds my gaze—which seems like all the time—I can’t decide whether he’s trying to intimidate me or win me over. I get the feeling it’s a bit of both.
In my decade of reporting on tech, the posture of executives has shifted from one of assumed friendliness to one of wariness and even antagonism. This is in part a result of tech’s embrace of alpha-male culture, but it’s also a defensive response to public disappointment with and mockery of the industry’s achievements (crypto, the blockchain, NFTs, etc.). I’m curious to see how Cardenas will try to sell me on humanoids—a risky technology that could be genuinely transformational, but that relies heavily on hype to rally interest and investors.
To get a better idea of where Cardenas stands, I ask him what he thinks of Musk. He admits that the Tesla CEO doesn’t necessarily have “a consistent track record on timelines” (at the time of writing, reports suggest Tesla has slashed its production targets from five thousand to two thousand units for this year) but says that Musk has an instinct for knowing when technologies and markets align, proving himself with electric vehicles, reusable rockets, and satellite internet. “I thought Elon was a really good person to be an initial major evangelist,” Cardenas tells me. “If he was jumping into humanoids, certainly there had to be some reason why.”
And there was, indeed, a very clear reason: artificial intelligence. Engineers will tell you that there are a number of material factors underpinning the current boom in humanoids. Electric motors have become cheaper and more powerful; digital sensors are faster and more reliable; and there have been downstream benefits to battery performance thanks to investment in electric cars and drones. But the single most important factor—the one at the nexus of hype and potential—is the growth of AI and, in particular, the promise of deep learning. It’s this technology—the use of algorithms to mine vast stores of data for patterns—that has powered the development of large language models like ChatGPT, and that roboticists hope will push their own machines into the next stage of development.
Instead of decoding the rules of human language from piles of text, these engineers are trying to emulate human dexterity by analyzing stores of video and sensor data. This training data is either generated by humans who control robots like puppets (a practice known as “teleoperation,” or “teleop”) or taken from virtual training environments, where tasks can be attempted using robot models over and over, far faster than real time. The resulting systems go by a few different names but are most commonly known as vision-language-action models (VLAs) or large behavior models (LBMs). The hope is that they will provide brains for what are already pretty capable bodies.
The early fruits of this approach are just beginning to appear. In June, Figure AI, a humanoid-robotics startup, released a sixty-minute video of its prototype humanoid nimbly sorting variously sized parcels, the kind of work that’s easy for humans but challenging for machines. Figure AI’s CEO, Brett Adcock, told Bloomberg that the robot’s operations were running on a single AI neural network. “It’s taking camera frames in; it’s outputting the actions,” he said. This is a major change from robot control systems that resemble a set of step-by-step instructions: Go to X position and close grabber. Move to Y position and release grabber. Instead, Adcock and his peers are creating systems that generalize rules from data, just like chatbots do. Thanks to demos like Figure AI’s, advocates claim that robotics is headed toward its own “ChatGPT moment,” a technical breakthrough that attracts investment and accelerates adoption. For others in the industry, though, the very phrase is anathema. It suggests that hype is clouding reality and that a bubble of rising stock prices and funding rounds is about to burst, setting back public confidence in robotics for years.
In other words: Are humanoids more like Facebook, which was once so committed to virtual reality that it rebranded itself as Meta, a pivot that failed utterly after the company’s augmented- and virtual-reality division lost more than $45 billion in a matter of years? Or are they more like self-driving cars, which were a similarly flashy locus for investment and criticism but now seem to be transforming into viable businesses? Waymo, a subsidiary of Alphabet, took about six years to rack up its first million miles of autonomous driving, but as of this summer its fleet drives two million miles every week.
In my conversation with Cardenas, we discussed the different ways robots already work alongside us. When I was catching my flight to Texas, for instance, I watched a floor-cleaning machine the size of a garbage bin sweep through Heathrow Airport. An older couple stopped and pointed as it trundled past, but most travelers ignored it. Then, after landing in Austin, I walked past a “robot barista” making coffee. The operation was pure spectacle: the robot was just a mechanical arm that held a cup underneath the nozzle of a machine. Here, I thought, are the two strands of robotics: one useful and invisible, the other theatrical and redundant.
There is a basic challenge in robotic design that I’ve come across time and time again. I refer to it as the dishwasher problem. It’s like this: Imagine you’re designing a robot to clean and dry dishes the way a human does. Think of all the difficulties you need to overcome: Your robot needs hands and arms that can manipulate items of different shapes and sizes, and a vision system to identify muck and grime. It needs to be strong enough to grasp slippery things, sensitive enough to handle breakables, and dexterous enough to clean the insides of items like mugs and graters. Alternatively, you could build a waterproof box, fill it with jets and sprays, and stuff everything inside. That’s a much simpler way to tackle the problem, and one that has gifted humanity the dishwasher.
Criticism of humanoids within the robotics industry often follows a similar logic. Why go to all the trouble of mimicking nature’s blueprints when our own designs can do the job more efficiently? We don’t make planes that fly by flapping their wings or ships that wriggle through the water like tuna. So why make things harder for ourselves?
The answers engineers have given me vary from the spiritual to the pragmatic. The more philosophical among them point out that humans have been making simulacra of our bodies for millennia. There are four-thousand-year-old “living statues” from ancient Egypt that were operated by strings, for example, while in the Renaissance, humanoid automatons were a novelty for the rich: mechanical courtiers built with wooden gears posed for audiences and played the flute. Such history implies that building humanoids is a cultural imperative, an instinct that parallels biological reproduction. But most engineers tend to be more practical. The world is built for humans, they say. Our environment is full of steps and handles for feet and hands, so any machine meant to operate alongside us must possess the same features.
Jonathan Hurst, the co-founder and chief robot officer at Agility Robotics, makes a more fundamental claim. Even if you were to start from scratch when designing a machine to handle human labor, he says, you would still end up re-creating the Homo sapiens blueprint simply as a matter of convergent evolution. Form must follow function, and if that function is to replace humans, it helps to be human-shaped.
Take, for example, warehouse work, which is usually seen as the beachhead for humanoids owing to its relative simplicity and recruitment challenges. Unless you want to completely redesign your warehouse, you need to build machines that are able to navigate between tightly packed shelves while also lifting and grabbing objects, and that can maintain their balance while their center of gravity shifts from high to low. If you were taking a simple approach, you might design something resembling a coatrack on wheels, with platforms that move up and down along a central tower. But for that machine not to topple over when it lifts a heavy object to an upper shelf, it would need a large base that would make it much less maneuverable. The best solution for this work, Hurst and others argue, is a bipedal, bimanual robot that can turn on the spot and adjust its center of gravity while carrying heavy items. Even the addition of a robot “head” makes sense, as you need to put cameras and sensors somewhere high to oversee the work. As Hurst puts it: “We’re not copying a person. We’re doing it for actual reasons.”
As I walk around Apptronik’s factory, I watch engineers fit together the pieces of this human puzzle. There are racks of arms and shelves of legs, metal gears hanging from hooks, and trays of actuators ready to be assembled. At one table it looks as though an anatomy lesson is taking place. The limbs, head, and chest of an Apollo unit have been deconstructed and laid flat, with its wires and cables teased apart like muscle and nerves. As I watch, an engineer activates the array, and the limbs, fixed to the table, start to twitch in place. I’m reminded of historical accounts detailing the first experiments with electricity, when scientists used rudimentary batteries to make the legs of dead frogs spasm, believing that the mysterious force they’d discovered might be the animating principle of life itself. Watching Apptronik’s engineers gathering around their creation, I feel a similar sense that the numinous is at work. Not many in the robotics industry claim to be playing God, but at times the sheer ambition of the work transcends simple engineering.
Though Cardenas acknowledges the practical reasons for building humanoids, his own motivation goes further. It’s not just about making machines fit our environment, he says, but about radically transforming the type of work they can perform. It’s about changing what a robot is. Traditional industrial machines, he explains, are static, expensive, and dangerous. These are the sorts of robots you see assembling cars in B-roll footage on the news: huge mechanical arms administering spot welds and paint jobs with unimpeachable rigor and precision. But the robot of the future, Cardenas says, is a different beast. Thanks to its AI training, it doesn’t rely on detailed instructions but is responsive and dynamic, able to recognize tools and environments and follow natural-language commands. It’ll cost as much as a sedan, work safely alongside us, and because it’s shaped like a human, it’ll do whatever we can. “We believe that the humanoid is like the personal computer,” he says. “It will be the robot that has the highest potential to scale.”
Later, as we sip coffee and talk more about Apptronik’s plans in the conference room, Cardenas begins to shift into a mode of utopian reasoning that you often hear in the tech world. Until now he’s been practical, explaining actuators and motors; here he enters a realm where numbers multiply frictionlessly and progress is inevitable. “Let’s say, on the whole, for every average person, fifty years from now, robots give you five years of improved quality of life,” he says. “Five years of improved life per person is forty billion years of collective energy that you could pour back into the sum total of the human experience. What do we do with a billion years, let alone forty billion?”
I nod along, aware that engaging too closely with these arguments is hardly what he wants. I think to myself: This is the real draw of the humanoid. It is mystical. It stokes the imagination, just like in earlier ages, for if we can replicate ourselves without defects, what can’t we achieve? The notion clearly motivates Cardenas, instilling in him a sense of fervor and urgency. As our meeting wraps up and his PR team tries to drag him to his next appointment, he can’t stop talking—he’s halfway out the door before he sticks his head back in the room to make one last point, to try to make me see. “As humans, we have conceived of humanoids longer than we’ve conceived of computers,” he says. “To me, the story is: Wow, all this time we’ve been thinking about this. Now here we are at the front end of it.” He stops, hand on the doorframe, then asks aloud the questions that plainly occupy him: “Where does it go? What does that look like?” He stares past me as if into some unknowable future, then smiles, shrugs, and walks back into the factory.
In the genesis myth of ancient Sumer, the gods made humans with a clear, if depressing, purpose: to be their servants. Once molded from clay, the first humans tilled the fields, built temples, and provided sacrificial lambs for the pantheon’s delight. Advocates of our robot future have similarly mundane plans for our mechanical progeny. They’ll work in fields, factories, and warehouses, of course, but just as in the Sumerian creation story, they’ll also wait on us, following a cultural tradition of robot butlers from Rosey the Robot in The Jetsons to Robin Williams’s noble robo-servant Andrew in Bicentennial Man. Even today, the robot butler is shorthand for the elusive Good Timeline, emblem of a future in which the luxury of household staff is made accessible and ethical thanks to the wonders of technology.
No humanoid startup draws more heavily on this vision than 1X Technologies, a Palo Alto–based company founded by the Norwegian roboticist Bernt Børnich (who, according to Cardenas, is one of the industry’s “true believers”). Last year, 1X unveiled NEO, a “humanoid robot for the home” that is still in the prototype stage and undergoing testing in the homes of 1X employees. Instead of the industrial look favored by many rival robots, NEO is notable for its aggressively nonthreatening appearance. The machine is clad in a beige knitted bodysuit (this has the practical benefit of covering up pinch points, the gaps between joints liable to trap human fingers) and has a diminutive head fronted by a smooth black visor. In one promo video, NEO is shown performing household chores in minimalist homes, carrying laundry and pushing a vacuum cleaner while its owners do something inscrutable with an iPad. The overall mood is cozy, serene, and neutered: a weighted blanket for the robot world.
Børnich’s decision to build humanoids for the home hinges on the technical premise of the ChatGPT moment. If data is key to creating AI control systems, and if robot butlers are the end goal of the industry, then, Børnich says, they need to begin testing in these environments as soon as possible. This in itself is not crazy. Boston Dynamics, perhaps the most established and best-known robotics firm in the world, is building LBMs for its humanoid, Atlas, by collecting video and sensor data of robots doing things like slicing fruit or installing bike parts. The company’s robots are controlled remotely, either in virtual environments or in real life, using teleoperation systems. Training data is then used to create an AI model that can carry out these tasks autonomously. It’s similar to how a chess-playing AI is first trained on games played by humans, then uses this information to create its own strategies and moves. As Scott Kuindersma, a vice president of robotics research at Boston Dynamics, told me, “If you have a robot and a teleop system that you can use to repeatedly produce a behavior on the robot, then we basically have the technology to turn the crank and turn that into an autonomous policy.” But, he added, that doesn’t mean the robot will perform its task with perfect accuracy or reliability, and it certainly doesn’t mean robots are ready to be placed in the home.
With AI in the form of chatbots, plausible mistruths and inaccuracies usually have a limited scope for harm, but with a domestic robot, mistakes could be catastrophic. Imagine you tell your machine butler, “Pour me a cup of tea,” and it pours boiling water into your baby’s sippy cup rather than your mug. Or you tell it, “Put my laundry in the washing machine,” and it grabs a hamper of clothes in which your cat is hiding and fires up the express cycle. There are obvious security and privacy concerns, too. Any robot in your home would have an array of cameras, microphones, and sensors, becoming an alluring target for hackers, while the robots themselves would need to be accessed remotely for troubleshooting, potentially giving attackers physical access to your house. For these and other safety reasons, skeptics in the industry allege that companies promising to deliver domestic robots in a matter of years are overly optimistic. “This is a very high uncertainty prediction for me,” Kuindersma tells me. His colleague, Marc Theermann, the company’s chief strategy officer, says: “I’m not sure if [we] even believe that humanoids in the home is a thing in the foreseeable future.”
Because of the high-risk, high-reward nature of 1X’s plans, I’d been particularly keen to see NEO for myself. But the firm was frustratingly elusive. Communication was fragmented, my requests to visit the company’s headquarters were rejected, and a scheduled interview with Børnich himself was canceled at the last minute while I waited on the line. Such behavior tends to make a journalist only more determined, but when I traveled to San Francisco to visit 1X’s competitors and told the company’s representatives I was in the area, ready to chat anywhere, anytime, I was rebuffed for good. “Thanks again for the dialogue,” said the head of communications. “After careful consideration, we’ve decided not to move forward with the story at this time.”
In my experience, when companies promising transformational new technology decline to show it to the press, it’s not a great sign. Thankfully, 1X does talk to some favored sources, mostly YouTubers and podcasters who focus on our bright technological future, so I can tell you that Børnich comes across as warm and friendly, with a goofy smile. He looks like a retired skateboarder, with long blond hair and a wardrobe of baggy T-shirts, and he makes big promises. “The future where you have humanoids at home folding your laundry is a lot closer than you think, and the price will also be a lot lower than most people imagine,” he says in one video from last August, promising that 1X “can manufacture [robots] at the cost of a relatively affordable car.” In another interview from 2024, he speculates about a world in which 1X builds “thousands of NEOs in 2025, tens of thousands in 2026, hundreds of thousands in 2027, millions in 2028.”
To stoke belief in the promise of humanoid butlers, 1X has relied heavily on carefully orchestrated promotional content. In a YouTube video by the San Francisco filmmaker Jason Carman, 1X brings a NEO unit to Carman’s house “to do chores.” The video, titled “I Lived with a Humanoid Robot for 48 Hours,” has nearly half a million views and shows the NEO making coffee for Carman. This might be impressive, but it’s unclear whether the robot is operating autonomously or being controlled by an engineer. The only task the machine performs on camera is pouring boiling water over coffee grounds. A 1X employee then takes the brewer, pours the coffee into a mug, and hands it to Carman. “NEO made it,” he says. “I swear.”
Pointing out that the robot didn’t actually make the coffee but merely poured water from one container to another feels churlish, like shouting at a children’s magician: “It’s up his sleeve!” And as AI advances, it will become even more difficult to discern the truth of such demos. For example, many humanoids can now perform cartwheels or other impressive feats of acrobatics, but these may be one-off stunts rather than samples of a larger repertoire. Jim Fan, director of AI at Nvidia, has compared these performances to those of a “blind gymnast,” lacking awareness of the environment. If you were to introduce an obstacle in a backflipping robot’s path, it would crash right into it. Being able to copy one aspect of human mobility doesn’t mean we’ve mastered all of the underlying principles, too.
These demos also highlight another meaning of the “ChatGPT moment.” Just as many people ascribe human-level intelligence to ChatGPT because it can generate fluent speech, a capability we’d previously encountered only in conversations with conscious beings, we often imagine that humanoid robots are as physically capable as we are just because they possess similar bodies and can accomplish some of the same tasks. In both cases, though, the familiarity of appearances masks hidden limitations. “You see a robot making coffee, and people can make coffee, and the robot looks like a person, and you can easily extrapolate to all the other kitchenlike things the robot might be able to do,” Kuindersma tells me. “In reality, maybe the robot can literally only make this cup of coffee with this coffee maker.”
In a back room at Stanford University, engineers have mocked up a bare-bones grocery store, complete with shelves, baskets, and a scattering of unlabeled cans. It reminds me of my nieces’ kitchen play set, where the fine details of reality have been smoothed away so as not to confuse developing minds. I watch a wobbly figure stamping around the shelves and filling up a basket of goods. It looks more stable than my nieces do, but not by a whole lot.
The robot doing the shopping is Digit, the creation of Agility Robotics. The company is not trying to make an omnicapable robot butler, but something more immediately useful: a warehouse worker that can perform the basic but essential labor of moving objects from point A to point B. The design of Digit reflects Agility’s pragmatic approach. It’s a humanoid, yes, but with inhuman touches that suggest utilitarian preferences. Its head is a flat white mushroom rather than a skull; its “legs” are digitigrade, with backward-facing knees that make it easier to squat while standing flush against a shelf; and its “hands” can be either plastic nubs or viselike clamps.
Melonee Wise, Agility’s chief product officer at the time, is similarly restrained. She’s quiet and calm, with a neat bob of hair and unfussy glasses. She’s been in the industry for nearly two decades, having worked at the famed robotics incubator Willow Garage before co-founding Fetch Robotics, a startup that made autonomous mobile robots, or AMRs: small robots resembling coffee tables on wheels. Before humanoids, AMRs were the Next Big Thing, and after the usual cycle of hype and consolidation, they’ve actually proven themselves to be useful. In 2022, Amazon had more than half a million AMRs in its warehouses, working parallel to (though not necessarily alongside) its overtaxed human employees. As a result of her work with AMRs, Wise avoids framing humanoids as some sort of industrial panacea for every economic ill, from labor shortages to manufacturing bottlenecks. Instead, she presents them as what they are: a piece of hardware with its own drawbacks and affordances, like any other tool. “One of the biggest lessons I learned about commercializing technology is the technology part is easy,” Wise says. The hard part? “Making the technology usable.”
As I talk to Wise, I get a better understanding of the complexities of automating even simple tasks. She tells me about visiting a customer who wanted to use robots in the production of ball bearings. “So we’re watching this gentleman do the task,” she says. “The [bearings] go through a grinder and come out the other side. You’re supposed to pick them up and put them into the container. Then all of a sudden, I watch the gentleman who’s operating the machine go to a drawer, grab a rag, and wipe down the entire surface of the machine.” Wise asked if this was necessary. The worker said yes, of course: the machine gets gummed up. And just like that, she tells me, the task became more challenging to automate. Agility’s robots could move the bearings without a problem, but it would be trickier to teach them when it was time to wipe down the machine, or how to do so without human assistance.
As with the deceptive home robot demos, there is a difference between the ideal version of a task and the messy reality of putting humanoids to work in commercial settings. It’s a telling fact that only three U.S. robotics firms have made such deployments—Apptronik, Figure AI, and Agility—and even then, these are only pilot programs. In March of last year, Apptronik said Mercedes-Benz was “exploring potential use cases” for its machines but offered no details about how many robots were being tested or for how long. That January, Figure AI announced a partnership with BMW. Brett Adcock, the CEO, claimed that the company had “a fleet of robots” in operation. But a spokesperson for BMW later told Fortune that only a single robot was working in its plants. (Adcock threatened to sue the magazine for its reporting and did not respond to my own requests for comment.) Even Agility, despite its more realistic ambitions, has made slow progress. Last June, the company announced an agreement with warehouse operator GXO Logistics, but it seems to involve just two Agility robots. To put these figures in context: the International Federation of Robotics reports that some 4.2 million industrial robots are in operation globally, with more than half a million new machines installed annually. The claims by some humanoid-robotics companies that they will exceed these figures in a matter of years with unproven technology clearly don’t add up.
At Stanford, I spend some time watching Digit at work, after which I take a virtual tour of Agility’s headquarters in Oregon. I watch the company’s robots lift crates, grasp packages, and move loads. Agility’s representatives show me the company’s software, which displays a diagram of a factory or shop, with icons representing robots and workstations. Customers can then set up a working routine by clicking these pieces, like in a video game: click to select a robot, click here to pick up an object, click there to drop it off, and so on. Here, I think, are the basic components of an automated workforce, seemingly ready to go, and yet confined to limited demonstrations or experimental settings.
Wise acknowledges the challenges that remain—cost, reliability, integration—but is confident that the industry is on the cusp of major change. She saw this happen with AMRs, she says, and notes that a “big unlock” is coming, when humanoid robots are allowed to operate freely around humans (right now they’re constrained to caged “work cells” because of safety fears). “We believe by the end of 2026 we will have humanoid robots free-roaming in facilities with trained adults,” she says, at which point they’ll be able to slot into the workplace alongside humans. I think of Digit navigating Agility’s fake grocery store, and yes, there is a sense of plausibility, even of inevitability. Of course there will be robots in warehouses in a few years’ time, but I wonder what impact they will actually have on working life. They could be a liberating force, yes, but they could just as easily be another prop in scenes of human drudgery, as unexceptional as forklifts or barcodes. It’s likely they’ll be useful and frustrating in equal measure—sharing the burden of human work without ever seeming to diminish it.
It can be hard to imagine meaningful changes to the world you know, but it’s easier in unfamiliar places. Perhaps this is why, in my interviews with robotics experts, they keep on talking about the transformation that robots will create in China. It’s there, I’m told, that humanoid labor will truly come into its own, a prediction that seems to scare people as much as it excites them.
China first surpassed the United States in industrial robot density in 2022, with 322 machines per ten thousand employees compared with 274 in America, and the gap has only widened since. Beijing’s Ministry of Industry and Information Technology has made robotics a key part of its industrial strategy, and aims to increase robot installations to 500 units per ten thousand workers by the end of the year. A review of documents by Reuters in May found that the government allocated more than $20 billion to the sector over the previous year, with state procurement of humanoids and related tech increasing by a factor of forty-five.
The pace of development feels frenetic. Videos of Chinese robots performing martial arts and acrobatics are common on social media, while the Chinese robotics industry has organized a series of spectacles to show off its advances. In April, the city of Hangzhou hosted the first-ever robot kickboxing tournament, featuring several humanoid robots made by Chinese startup Unitree outfitted with gloves and protective headgear, operated by humans with video-game controllers. (The robots were more Rock ’Em Sock ’Em than trained martial artists, but still put up a good fight.) Then, in August, Beijing hosted the World Humanoid Robot Games, with more than 280 teams participating in twenty-six different events, including soccer, boxing, and the long jump. Again, the bots were not always gainly—one viral video showed a humanoid veering off a running track and knocking over a human spectator—but the atmosphere was one of dynamism and experimentation. Here, robots were running free, if sometimes a little too literally.
Like other Chinese industries, the country’s robotics sector benefits from economies of scale and generous state investment. “China will just manufacture everything, and the state backs it. So the volumes that they can produce are kind of absurd,” says George Chowdhury, a robotics analyst at ABI Research. The proximity of supply chains is also vital. U.S. companies looking to iterate on a new design often have to ship components back and forth to manufacturers in Asia. Chinese firms, on the other hand, can simply head to the factory next door, tweak things in person, and have new prototypes ready to test in a matter of days—the same reason China has dominated in high-tech markets like solar power, electric vehicles, and drones. “Drones and EVs aren’t easy to build—or, they weren’t at the time,” Reyk Knuhtsen of the market-research firm SemiAnalysis tells me. But Chinese manufacturers have been able to solve technological challenges by “brute-forcing the problem, just producing over and over and over until you figure it out.” Knuhtsen claims the same will be true of humanoids.
The United States, however, retains an edge in product quality and software. According to people who deal with the technology firsthand, American firms produce more reliable machines that are more easily integrated into the workplace. Robert Stokes, who runs a robot distributor that sells Chinese bots to U.S. customers, says that “a good twenty percent” of the robots he buys from Unitree arrive broken, requiring repairs even before they’re up and running. The company’s marketing can also be misleading, he warns, with its robots unable to perform the stunts you see on social media without significant modification. “You’ve got a lot of development you have to do personally,” he says. “When a new robot is released, the software tools are initially rudimentary.” And even if you can make a robot dance, it doesn’t always mean you should. Stokes gives the example of Unitree’s G1 humanoid robot, which can be seen performing backflip after backflip in promo videos. In real life, he says, all these gymnastics quickly wear out the machine’s components.
Still, Stokes is bullish about Chinese robots. The quality is improving rapidly, he says, and demand is growing. At the start of this year, he sold fewer than ten humanoids a month, but he predicts this figure will rise to a hundred units next year. He compares the market to that of smartphones: American robotics firms are copying Apple’s playbook, focusing on designing a few high-quality products, while Chinese companies are mimicking the Android ecosystem, pushing out high volumes in a range of designs. “The Apple approach is fine,” says Stokes. “But there’s a certain part of the market that needs the lower-end option.”
Naturally, the same technological challenges facing humanoids in the United States apply in China as well, though these considerations are increasingly overshadowed by geopolitics. Chowdhury says that there is a growing cadre in government and industry that has bought into the hype around humanoids and believes that the world economy will be dominated by whoever cracks the problem first. “If China can print its own workforce, then you’re kind of in trouble economically, right?” he says. “It becomes almost an existential thing. It’s almost a cold war ramping up, I think, in the minds of some people.”
When I talk with Knuhtsen about a future in which China and the United States have cranked up production and humanoids work as well as humans do for $7,000 a pop, he’s hesitant about the scale of his own predictions. The most extreme scenario is that robots become a $65 trillion market and replace all human labor, he says. “Coming out of my mouth, I hate that. I don’t want to be the guy saying that. But effectively that’s what’s in store, should it go correctly.”
This would create a workforce that is not only cheap, but also pliant and obedient—an appealing notion for elites who find it challenging to deal with human workers. It’s no coincidence, for example, that Musk, one of robotics’ biggest boosters, frequently decries the presence of unions in his factories. If only these troublesome humans could be automated, it would free the holders of capital from their obligations to the working class. And if workers can no longer withhold their labor as leverage, then the provision of social goods like housing and health care becomes discretionary. This would be a seismic change in the world’s social dynamics, regardless of which country gets there first. Musk has said he believes “we’re headed to a radically different world” with humanoids, but the extent of the change—and who it will benefit—is far from clear.
As I talk with analysts about these possible outcomes, there’s something in our discussions that reminds me of Pascal’s wager. It’s a situation where the potential consequences are so extreme—in this case, a robot takeover of the global economy—that you’re compelled to take them seriously, despite their improbability. But frankly, it just doesn’t seem credible to me that robots will dominate the world’s economy within only a few decades. (Over a longer time frame, widespread adoption feels significantly more plausible.) Knuhtsen recognizes the unsettling scale of his forecasts. He describes his work, at times, as “looking into the abyss,” and it strikes me that there’s something hypnotic about contemplating change of this kind. It’s millenarian: a rapture for workers. You encounter these kinds of predictions from AI doomers, who warn about the threat posed to humanity by superintelligence. Often, they’re so entranced by the scale and drama of their claims that they lose sight of their implausibility. When playing God, it’s hard to stay grounded.
Back home in London after shaking hands with robot ambassadors, I’m still uncertain about the future of humanoids. Then, one of my contacts pings me about an upcoming conference in town: a meeting of various industry players talking through the fine details of putting robots to work. I dutifully sign up and head over to Hammersmith, arriving at one of those hotels that seem to exist outside local geography and function solely as gathering places for men and women in lanyards. As I dodge demonstrations of wheeled and legged robots, I hear chatter about motors and actuators, import taxes and safety standards, and the difficulty of getting these things to do what they’re supposed to.
Over the course of the conference, I dip in and out of talks from venture capitalists and startup execs. There are flashes of utopian speculation, but the event feels more grounded than my visits to Texas and California. Perhaps it’s simply the gloom that pervades all such industry get-togethers—the hotel carpet and the low-level anxiety. But there’s a reminder of history, too. In one talk, a researcher, Werner Kraus of Germany’s Fraunhofer Institute, notes that the industry has been here before, with so-called “collaborative robots”—small robotic arms designed to work safely alongside humans, and once thought to be the industry’s future. “We all have in mind how it went with the cobots twenty years ago,” he reminds the audience dourly, “when we said cobots would be the solutions for all our problems.” It didn’t work out that way: high unit costs and hardware limitations confined the machines to relatively niche uses, and the promised revolution never materialized. At one point, in the smoking area, I chat with an entrepreneur from France who says he’s keen to start new robotics companies, but investors won’t commit. “It’s a mess,” he says. “VCs want to have a clearer understanding of the return. They don’t want to go full risk.”
Later, I watch a talk delivered by Aaron Prather, the director of robotics at ASTM International. ASTM is a global standards agency, one of those unseen but essential components of the global economy that publishes more than thirteen thousand technical standards on everything from the proper viscosity of industrial lubricants to methods for decontaminating radioactive materials. Some standards are voluntary and others become law, but all contribute to the invisible framework that keeps the global economy from jamming up (or accidentally making you radioactive). Of course, Prather says, robots will have to meet certain standards, too.
He runs through some of the hazards that will need to be managed. There are the pinch points. There’s the need for “e-stops” (emergency stop buttons) and adaptations for the blind and deaf. And then there are the psychosocial impacts, too: managing the feelings we project onto these humanlike objects. “When we see something that looks like us, we expect certain things,” Prather says. “And when those expectations aren’t met, disappointment kicks in.” In other words, there are always more problems than we anticipate. He gives the example of stability tests like the one I conducted on Apollo. “We’ve all seen these videos, right? We’ve actually done some tests in that area and found that robots—humanoids—are doing really good on those.” But, he goes on to explain, there are other ways to knock a robot over, and what humanoids can’t currently withstand is a gentle, slow push. “No one’s testing for that,” he says.
It’s a reminder of the unending complexity of a technology designed in our own image, though not necessarily a dispiriting one. Discussing these bureaucratic matters is itself a sign of maturity, and it’s clear from what I’ve seen that we’ve taken the next step in humanoid development. But from all my discussions, I’ve found that we can’t understand these machines by analogy alone. Just because ChatGPT became ubiquitous in a matter of months doesn’t mean that humanoids will follow a similar trajectory, and just because we’re replicating human anatomy doesn’t mean we’ve re-created human ability.
After Prather’s talk, I stay in my seat in the main auditorium. The schedule tells me that next up is a mysterious “theatrical encounter,” which I presume will be some sort of dramatic turn starring humans and robots, a metaphor for our harmonious future with the machines. The lights in the room dim, and the stage is illuminated, revealing a human dancer in a sequined bodysuit posed next to a humanoid no bigger than a child. As Europop blasts from the speakers, the dancer performs cartwheels and backflips in the general vicinity of the robot, which at one point drops to the floor to crank out some push-ups. The lanyards and I watch impassively, and I think of Musk’s spandex-clad slideshow. After a few long minutes, the performance wraps up, and the dancer runs out through the aisles, exiting through the doors at the back of the hall. Left alone in the spotlight, the robot walks to the edge of the stage, waiting for its handlers, unable to manage the stairs.