PC/104-Plus: The brains behind the DARwIn humanoid robot

PC/104 and Small Form Factors — June 16, 2008

Powered by a PC/104-Plus board and National Instruments' LabVIEW, Virginia Tech's DARwIn is making huge strides in robotics – literally, with a bipedal soccer-playing machine. Karl and Dennis explore the dynamics of bipedal motion and describe the electromechanical platform used to control these robots' movements.

The Dynamic Anthropomorphic Robot with Intelligence (DARwIn) series is a family of humanoid robots capable of bipedal walking and performing human-like motions. Developed at the Robotics & Mechanisms Laboratory (RoMeLa) at Virginia Tech, DARwIn is a research platform for studying robot locomotion that served as the base platform for Virginia Tech’s first entry to the humanoid division of RoboCup 2007, an international autonomous robot soccer competition[1].

The 600 mm tall, 4 kg robot (the latest version of DARwIn) has 21 degrees of freedom with each joint actuated by a coreless DC motor via distributed control with controllable compliance. Using a computer vision system on the head, an Inertial Measurement Unit (IMU) in the torso, and multiple force sensors on the feet, DARwIn can implement human-like gaits while navigating obstacles and will eventually be able to traverse uneven terrain while implementing complex behaviors such as playing soccer.

Static versus dynamic gaits

With a few exceptions, such as Honda ASIMO, Sony QRIO, and KAIST HUBO, most legged robots today walk using what is called the static stability criterion. The static stability criterion is an approach that prevents the robot from falling down by keeping its center of mass over the support polygon, adjusting the position of its links and the pose of its body very slowly to minimize dynamic effects[2]. Thus, at any given instant in the walk, the robot could "pause" and not fall over.
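The static stability criterion reduces to a geometric test: is the ground projection of the center of mass (COM) inside the support polygon? A minimal Python sketch, with a made-up foot outline and COM positions:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is a 2D point inside a polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Support polygon of a single flat foot (meters), COM projected onto the ground
foot = [(0.00, 0.00), (0.10, 0.00), (0.10, 0.05), (0.00, 0.05)]
print(point_in_polygon((0.05, 0.025), foot))  # COM over the foot: statically stable
print(point_in_polygon((0.15, 0.025), foot))  # COM outside: the robot would tip
```

In a static gait this test must hold at every instant; the dynamic gaits discussed next deliberately let it fail.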

Static stability walking is generally energy inefficient because the robot must constantly adjust its pose to keep its center of mass over its support polygon. This generally requires large torques at the joint actuators, similar to a human standing still with one foot off the ground and the knee of the supporting leg bent.

Humans naturally walk dynamically with their center of mass almost always outside the support polygon. Thus, human walking can be considered as a cycle of continuously falling and catching the fall: an exchange of potential energy and kinetic energy of the system, like the motion of a pendulum. We fall forward and catch ourselves with our swinging foot while continuing to walk forward. This falling motion allows our center of mass to continually move forward, not expending energy to stop the momentum. The lowered potential energy from this forward motion is then increased again by the lifting motion of the supporting leg.

Dynamic stability is commonly measured using the Zero Moment Point (ZMP), which is defined as "the point where the influence of all forces acting on the mechanism can be replaced by one single force" without a moment term[3]. If this point remains in the support polygon, then the robot can apply some force or torque to the ground, which in turn means the robot can have some control over its motion (the system). Once the ZMP moves to the edge of the foot, the robot is unstable and cannot recover without extending the support polygon (planting another foot or arm).
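For a simplified point-mass ("cart-table") model with the COM held at constant height z_c, the ZMP along one axis can be written down directly as p = x - (z_c / g) * x_ddot. The sketch below uses this simplification; the numbers are illustrative, not DARwIn's actual parameters:

```python
G = 9.81     # gravity, m/s^2
Z_C = 0.30   # assumed constant COM height, m

def zmp(x_com, x_ddot):
    """Zero Moment Point along one axis for the cart-table model."""
    return x_com - (Z_C / G) * x_ddot

# COM directly over the ankle and not accelerating: the ZMP sits under the COM
print(zmp(0.0, 0.0))               # 0.0
# Accelerating the COM forward shifts the ZMP backward; if it reaches the
# heel edge of the support polygon, the robot can no longer push against
# the ground to recover
print(round(zmp(0.0, 1.0), 4))     # -0.0306
```

Checking this value against the support polygon is exactly the stability test the gait controller applies at each step.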

Kinematically correct

Parameterized gaits can be optimized using the ZMP as a stability criterion. Stable hyperbolic gaits can be generated by solving the ZMP equation for a path of the center of mass. Additionally, the ZMP can be measured directly or estimated during walking to give the robot feedback to correct and control walking. DARwIn was developed and is being used for research on such dynamic gaits and control strategies for stability[2,4].
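The hyperbolic gaits mentioned above come from the linear inverted pendulum model: with the ZMP held at the support point, the COM obeys x_ddot = (g / z_c) * x, whose solutions are combinations of cosh and sinh. A sketch with illustrative parameters:

```python
import math

G = 9.81
Z_C = 0.30                     # assumed constant COM height, m
T_C = math.sqrt(Z_C / G)       # time constant of the inverted pendulum

def com_trajectory(x0, v0, t):
    """COM position at time t given initial position x0 and velocity v0,
    relative to a support point fixed at the origin."""
    return x0 * math.cosh(t / T_C) + T_C * v0 * math.sinh(t / T_C)

# A COM starting behind the support point with forward velocity "falls"
# through the support point and out the other side, like a pendulum
for t in (0.0, 0.1, 0.2, 0.3):
    print(round(com_trajectory(-0.05, 0.35, t), 3))
```

Solving this equation backward (choosing x0 and v0 so the COM lands where the next footstep needs it) is how a COM path is generated for a planned ZMP.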

Figure 1

DARwIn’s primary joints are kinematically equivalent to human joints. Humans have ball and socket joints at the shoulders and hips, allowing three axes of rotation about a single point. Though DARwIn does not have a ball and socket joint, it achieves the same kinematics with three motors’ axes of rotation intersecting at a single point, making it equivalent to a ball and socket joint (Figure 1). Not only does this make the kinematic configuration closer to a human’s, but it also simplifies the mathematics involved in controlling and creating the robot’s motion.
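The equivalence can be checked numerically: three successive rotations about axes through a single point compose into one rotation about that point, just as a ball and socket joint provides. A pure-Python sketch with 3 x 3 matrices:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Three motor angles (yaw, pitch, roll) -> one equivalent joint rotation
R = matmul(rot_z(0.4), matmul(rot_y(-0.2), rot_x(0.7)))

# The composition is still a single proper rotation: its determinant is +1
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
     - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
     + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
print(round(det, 6))  # 1.0
```

Because the three axes intersect, the translation terms that would otherwise appear in the forward kinematics vanish, which is the simplification the article refers to.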

DARwIn has 21 degrees of freedom (6 in each leg, 4 in each arm, 1 in the waist), 4 force sensors on each foot, a 3-axis rate gyro, a 3-axis accelerometer, and space to house a computer and batteries for powering the Robotis’ Dynamixel DX-117 motors, Flexiforce sensors, and computing equipment. The motors operate on a serial RS-485 network, allowing the motors to be daisy-chained together. Each motor has its own built-in potentiometer and position feedback controller, creating distributed control.
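As a rough illustration of that daisy-chained serial scheme, the sketch below builds a Robotis protocol 1.0 instruction packet commanding one motor's goal position. The goal-position register address (0x1E) is taken from the AX/DX-series control table and is an assumption here; verify it against Robotis' documentation before use:

```python
WRITE_DATA = 0x03     # protocol 1.0 "write data" instruction
GOAL_POSITION = 0x1E  # assumed control-table address for goal position

def dynamixel_packet(motor_id, instruction, params):
    """Build [0xFF, 0xFF, ID, LENGTH, INSTRUCTION, params..., checksum]."""
    length = len(params) + 2                 # instruction byte + checksum byte
    body = [motor_id, length, instruction] + list(params)
    checksum = (~sum(body)) & 0xFF           # low byte of the one's complement
    return bytes([0xFF, 0xFF] + body + [checksum])

def set_goal_position(motor_id, position):
    """Command one joint to a 10-bit goal position (0-1023), little-endian."""
    lo, hi = position & 0xFF, (position >> 8) & 0xFF
    return dynamixel_packet(motor_id, WRITE_DATA, [GOAL_POSITION, lo, hi])

pkt = set_goal_position(3, 512)   # center the motor addressed as ID 3
print(pkt.hex())
```

Because every motor on the RS-485 bus sees every packet but only the one with the matching ID responds, the same wire serves the whole chain, and each motor's built-in controller closes the position loop locally.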

DARwIn II kicks

In addition to the mechanical design improvements over DARwIn I, DARwIn II has added intelligence that allows it to perform higher-level tasks, like playing soccer autonomously. DARwIn II’s electronics provide power management, a computing architecture, and a sensing scheme to gather information on salient environmental features.

Two 8.2 V (nominal) lithium polymer batteries power DARwIn. The batteries are usually attached to the lower body (legs or feet) to keep the robot’s center of gravity below its waist. These batteries provide 2.1 Ah, which gives DARwIn a little more than 15 minutes of runtime. The power circuit provides 3.3 V, 5 V, and 12 V for the various digital electronics within DARwIn. However, the joint actuators are run directly off battery power, which drops from 16.4 V to 14.8 V during runtime. In addition to providing power to DARwIn’s main systems, the power electronics allow for an external power connection and a seamless switch between power sources. Additionally, this circuit prevents reverse polarity, overvoltage, overcurrent, and undervoltage conditions from damaging the computing, sensing, and actuation components.
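A quick sanity check on the runtime figure, assuming the two packs are wired in series (2 x 8.2 V = 16.4 V), so the chain still supplies 2.1 Ah:

```python
capacity_ah = 2.1                       # series packs keep the single-pack capacity
runtime_h = 15 / 60.0                   # "a little more than 15 minutes"
avg_current = capacity_ah / runtime_h   # average draw implied by that runtime
avg_voltage = (16.4 + 14.8) / 2         # midpoint of the stated discharge range
avg_power = avg_current * avg_voltage
print(round(avg_current, 1), "A,", round(avg_power), "W")
```

That works out to roughly an 8.4 A, 130 W average load, most of it going straight to the 21 joint actuators running off raw battery voltage.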

DARwIn’s computing architecture is set up to use a centralized control scheme, which is run on an Arbor Em104P-i7013 PC/104-Plus computer (Figure 2) with a 1.4 GHz Pentium M processor, 1 GB of RAM, CompactFlash drive for storage, IEEE 1394 card, serial communication, USB, Ethernet, and IEEE 802.11 for wireless communication.

Figure 2

DARwIn also has two IEEE 1394 (FireWire) cameras and a 6-axis rate gyro/accelerometer (IMU) for vision and localization. The cameras capture RGB video at 15 frames per second (fps) at 640 x 480 resolution or 30 fps at 320 x 240. The cameras are attached to a pan and tilt unit, which allows the robot to look at its surroundings. Two lithium polymer batteries in the feet allow the robot to be powered autonomously.

Software for reactive-based control

For higher-level behaviors such as playing autonomous soccer, DARwIn uses a reactive behavior-based control architecture programmed using LabVIEW Real-Time. Reactive-based control has the advantage of being simple and robust. Figure 3 shows the flow diagram of the entire control algorithm used for RoboCup 2007. The sensor data is processed into meaningful information: ball position, goal position, opponent positions, and the robot's orientation. The behavior modules use this information to dictate their respective actions. The motion control module uses orientation information to correct and stabilize the bipedal walking gait.

Each behavior module’s result is sent to the integrator, which decides the most appropriate behavior to implement in a given situation. For example, three behaviors may be "kick the ball," "reposition," and "avoid obstacles." If there are no obstacles (or opponent robots) nearby, the integrator will more likely choose to reposition the robot for a better kick. However, if an opponent is nearby, the integrator will "skip" repositioning and move straight to kicking the ball.
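The integrator's arbitration can be sketched as a priority vote: each behavior module offers an action with a situational priority, and the integrator picks the highest. The behavior names follow the article; the priority values themselves are an illustrative assumption:

```python
def kick(state):
    # Kicking is only useful when the robot is lined up behind the ball
    return ("kick the ball", 3 if state["aligned"] else 0)

def reposition(state):
    # Repositioning is preferred when there is time, i.e., no opponent nearby
    return ("reposition", 2 if not state["opponent_near"] else 1)

def avoid_obstacles(state):
    return ("avoid obstacles", 3 if state["obstacle_near"] else 0)

def integrator(state, behaviors=(kick, reposition, avoid_obstacles)):
    """Pick the highest-priority action offered by any behavior module."""
    return max((b(state) for b in behaviors), key=lambda vote: vote[1])[0]

# No opponent nearby: take the time to reposition for a better kick
print(integrator({"aligned": False, "opponent_near": False, "obstacle_near": False}))
# Opponent closing in and already aligned: skip repositioning, kick now
print(integrator({"aligned": True, "opponent_near": True, "obstacle_near": False}))
```

Because each behavior is a small, independent function of the current sensor state, modules can be added or tuned without touching the rest of the controller, which is the robustness the reactive approach buys.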

Once the integrator decides what the robot should do, the result is sent to the motion generator and also broadcast as a team message for teammates to read in order to coordinate team play. The motion generator creates the necessary motion for the motion control based on the integrator's result.

Next up: DARwIn III

DARwIn III looks to further improve the successful designs of the previous versions. Because the robot needs finer control of its walking gaits and increased processing power for a robust vision system, an ARM9 microcontroller will be introduced in DARwIn III’s design to handle all aspects of gait generation, leaving the PC/104-Plus computer to run the behavior and vision routines.

Gait commands to the microcontroller will tell the robot to move in a specific way (direction, speed, gait type, pose, and so on). The PC/104-Plus board and the microcontroller communicate with one another over an RS-232 network, with the microcontroller communicating over an RS-485 network with the Robotis Dynamixel motors.
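A hypothetical wire format for such a gait command might pack direction, speed, and gait type into a few bytes for the RS-232 link. The field layout below is purely an illustration, not DARwIn III's actual protocol:

```python
import struct

GAIT_TYPES = {"stand": 0, "walk": 1, "turn": 2, "kick": 3}

def encode_gait_command(direction_deg, speed_mm_s, gait):
    """Pack a command little-endian: int16 heading, uint16 speed, uint8 gait type."""
    return struct.pack("<hHB", direction_deg, speed_mm_s, GAIT_TYPES[gait])

def decode_gait_command(payload):
    """Unpack a command on the microcontroller side."""
    direction, speed, gait_id = struct.unpack("<hHB", payload)
    return direction, speed, gait_id

cmd = encode_gait_command(-30, 120, "walk")   # walk at 120 mm/s, heading 30 deg left
print(decode_gait_command(cmd))               # (-30, 120, 1)
```

Keeping the command at this level of abstraction is the point of the split: the PC/104-Plus board says where to go, and the microcontroller owns the timing-critical details of how the legs get there.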

DARwIn III also will use a world model to dictate its behavior. A world model is a completely known virtual model of the environment with the states of the model updated from sensor inputs. A world model allows for planning, which reactive behavior does not, and leads to more efficient behaviors.

To meet the modeling demand, the PC/104-Plus board will be upgraded to a Core 2 Duo-based board running at approximately 2 GHz, allowing RoMeLa team members to finish developing their vision, behavior, and walking gait algorithms on a computing platform running LabVIEW Real-Time.

Going farther with FPGAs

The final implementation of DARwIn’s electronics package calls for a large reduction in weight, power consumption, and size while increasing performance. Several improvements are planned.

First, the PC/104-Plus Core 2 Duo will be replaced by the old PC/104-Plus 1.4 GHz Pentium M to save battery power. To boost performance, a new set of FPGAs will be added for each system, such as behavior and vision. This will allow multiple systems, such as walking, vision, and behaviors to be more complex and run simultaneously on their own processors without impinging on each other’s operation. More importantly, DARwIn’s reaction time to an ever-changing environment will decrease as a result of the parallel architecture.

In addition, the specific I/O required by each system will be on the FPGAs, eliminating the need to add I/O boards present in DARwIn III’s larger computing package. The walking algorithms running on a microcontroller could then be instantiated on an FPGA and control custom joint actuators instead of the Robotis Dynamixel motors. Alternate joint actuators will be used because the controller within the motors is Robotis’ intellectual property, and the ability to design the motors’ controller is becoming a necessity.

Finally, all systems will be connected to deterministic buses so the delay caused by information transfer is known. The current setup in DARwIn III does not use feedback from the Dynamixel motors because the proprietary code shares information in a delayed fashion on a nondeterministic, polling architecture bus. Using the team's own joint actuators can sidestep many of these problems and allow a deterministic bus such as EtherCAT to be implemented. Without such a bus, large latencies and indeterminism will make it very difficult to implement active real-time controllers.

Karl Muecke is a PhD candidate at Virginia Tech in Blacksburg, Virginia, and the lead engineer on the DARwIn project.

Dennis Hong is Assistant Professor of Mechanical Engineering and director of RoMeLa at Virginia Tech. He holds a PhD and MS from Purdue University and a BS from the University of Wisconsin-Madison.

Virginia Tech RoMeLa
540-231-7195
kmuecke@vt.edu
dhong@vt.edu
www.me.vt.edu/romela/

References:

  1. D. W. Hong, "Biologically Inspired Locomotion Strategies: Novel Ground Mobile Robots at RoMeLa," The 3rd International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2006), Seoul, South Korea, October 15-17, 2006.
  2. J. Kim, "On the Stable Dynamic Walking of Biped Humanoid Robots," Korea Advanced Institute of Science and Technology, Daejeon, South Korea, 2006.
  3. M. Vukobratovic, "Zero-Moment Point - Thirty Five Years of Its Life," International Journal of Humanoid Robotics, Vol. 1, No. 1, 2004.
  4. Q. Huang, K. Yokoi, S. Kajita, et al., "Planning Walking Patterns for a Biped Robot," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 280-289.

It shoots, it scores

Check out a DARwIn robot in action at: www.me.vt.edu/Robocup/Site/Media.html