Cornell University researchers have installed electronic ‘brains’ on solar-powered robots that are 100 to 250 micrometers in size (smaller than an ant’s head) so that they can walk autonomously.
The development could set the stage for a new generation of microscopic devices that can track bacteria, sniff out chemicals, destroy pollutants, conduct microsurgery and scrub the plaque out of arteries.
Although microscopic machines have already been designed to crawl, swim, walk and fold themselves up, there were always ‘strings’ attached; to generate motion, wires were used to provide electrical current or laser beams had to be focused directly onto specific locations on the robots.
“Before, we literally had to manipulate these ‘strings’ in order to get any kind of response from the robot,” said Itai Cohen, professor of physics. “But now that we have these brains on board, it’s like taking the strings off the marionette. It’s like when Pinocchio gains consciousness.”
The ‘brain’ in the new robots is a complementary metal-oxide-semiconductor (CMOS) clock circuit that contains a thousand transistors, plus an array of diodes, resistors and capacitors.
The integrated CMOS circuit generates a signal that is used to produce a series of phase-shifted square waves, which in turn set the gait of the robot. The robot legs are platinum-based actuators, and both the circuit and the legs are powered by solar energy.
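As a rough conceptual sketch of that control scheme (not the researchers’ actual circuit, whose details are not given here), the short Python example below generates two phase-shifted square waves and treats each one as the drive signal for a hypothetical leg actuator; the phase offset between the signals is what fixes the stepping order, i.e. the gait.

```python
# Conceptual sketch only: phase-shifted square waves as gait signals.
# The leg names, frequency and phase values are illustrative assumptions,
# not parameters from the Cornell robots.
import numpy as np

def square_wave(t, freq_hz, phase_deg):
    """Return 1 where a phase-shifted square wave is high, 0 where it is low."""
    phase = 2 * np.pi * freq_hz * t + np.deg2rad(phase_deg)
    return (np.sin(phase) >= 0).astype(int)

t = np.linspace(0.0, 2.0, 1000)            # two seconds of simulated time
gait_phases = {"front_leg": 0.0,           # hypothetical two-legged gait:
               "rear_leg": 90.0}           # rear leg lags the front leg by 90 degrees

drive = {leg: square_wave(t, freq_hz=1.0, phase_deg=p)
         for leg, p in gait_phases.items()}

# Sampling a few instants shows the two legs switching out of step,
# which is what produces a repeating walking cycle in this toy model.
for i in range(0, len(t), 125):
    print(f"t={t[i]:.2f}s  front={drive['front_leg'][i]}  rear={drive['rear_leg'][i]}")
```

In a toy model like this, changing the phase offsets or the base frequency corresponds, loosely, to changing the gait or the walking speed.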
The new robots are approximately 10,000 times smaller than macro scale robots that feature onboard CMOS electronics, and they can walk at speeds faster than 10 micrometers per second.
“Eventually, the ability to communicate a command will allow us to give the robot instructions, and the internal brain will figure out how to carry them out,” Cohen said. “Then we’re having a conversation with the robot. The robot might tell us something about its environment, and then we might react by telling it, ‘OK, go over there and try to suss out what’s happening.’”
By customizing foundry-built electronics, the researchers have been able to build a platform that can enable other researchers to outfit microscopic robots with their own apps – from chemical detectors to photovoltaic ‘eyes’ that help robots navigate by sensing changes in light.
“What this lets you imagine is really complex, highly functional microscopic robots that have a high degree of programmability, integrated with not only actuators but also sensors,” said postdoctoral researcher Michael Reynolds.
“We’re excited about the applications in medicine – something that could move around in tissue and identify good cells and kill bad cells – and in environmental remediation, like if you had a robot that knew how to break down pollutants or sense a dangerous chemical and get rid of it.”
Although the team admits the electronics of the project are quite “basic”, the real challenge was designing them to operate at low power, so that the robots would not need oversized solar panels just to move.
The finished circuits arrived on 8-inch (20cm) silicon-on-insulator wafers. At 15 microns tall, each robot brain – essentially also the robot’s body – was a “mountain” compared to the electronics that normally fit on a flat wafer, Reynolds said.
“One of the key parts that enables this is that we’re using micro scale actuators that can be controlled by low voltages and currents,” said Alejandro Cortese, who is CEO of OWiC Technologies, a company he co-founded to commercialize optical wireless integrated circuits for micro sensors. “This is really the first time that we showed that yes, you can integrate that directly into a CMOS process and have all of those legs be directly controlled by effectively one circuit.”
The team created three robots to demonstrate the CMOS integration: a two-legged Purcell bot, named in tribute to physicist Edward Purcell, who proposed a similarly simple model to explain the swimming motions of microorganisms; a more complicated six-legged antbot, which walks with an alternating tripod gait, like that of an insect; and a four-legged dogbot that can vary the speed with which it walks thanks to a modified circuit that receives commands via laser pulse.
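To make the antbot’s gait concrete, here is a minimal, purely illustrative sketch of an alternating tripod schedule (the leg labels and step logic are assumptions, not taken from the researchers’ design): the six legs are split into two tripods that swing in antiphase, so three feet are always on the ground.

```python
# Illustrative alternating tripod gait for a six-legged walker.
# Leg labels (L1..L3 left side, R1..R3 right side, front to back) are assumed.
LEFT = ["L1", "L2", "L3"]
RIGHT = ["R1", "R2", "R3"]

# Tripod A: front-left, middle-right, rear-left; Tripod B is the complement.
TRIPOD_A = ["L1", "R2", "L3"]
TRIPOD_B = [leg for leg in LEFT + RIGHT if leg not in TRIPOD_A]

def gait_step(step_index):
    """Return (swinging legs, planted legs) for a given step of the cycle."""
    if step_index % 2 == 0:
        return TRIPOD_A, TRIPOD_B
    return TRIPOD_B, TRIPOD_A

for step in range(4):
    swing, stance = gait_step(step)
    print(f"step {step}: swing {swing}, stance {stance}")
```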
“The real fun part is, just like we never really knew what the iPhone was going to be about until we sent it out into the world, what we’re hoping is that now that we’ve shown the recipe for linking CMOS electronics to robotic actuating limbs, we can unleash this and have people design low-power microchips that can do all sorts of things,” Cohen said. “That’s the idea of sending it out into the ether and letting people’s imaginations run wild.”