Thursday, September 10, 2009

Post Mortem - Robot Rev 1

It's been a long time since I posted our progress on the self-balancing robot project. As you may have guessed, that's because it's been a long time since we worked on it. Our first attempt didn't go all that well, and we both got busy with other projects. Recently Alan and I have been wanting to jump back in and try again. This report will describe what we learned from our first attempt.

As you may recall, last time I reported that the robot was able to roll around and respond to commands from the remote control. We had two problems, however. The first was that one wheel started turning immediately after we switched on the power. It would continue to turn for a couple of seconds until the Arduino microcontroller finished booting. Our proposed fix worked well: we installed a couple of pull-down resistors to hold the control lines low until the Arduino starts driving them. (They were hard to add, though, because the finished breadboard wiring was very messy. That's another area we want to improve next time.)

There was still a question about how the motors were drawing any current when the USB was connected to the Arduino but the battery wasn't. After some discussion on the Arduino message board, the consensus was that some voltage does indeed appear on the VIN pin if you apply power to the 5V pin. So that mystery was solved. With all of that, we now had a dandy little remote control car.

However, we were a long way from a self balancing robot. The second problem was more fundamental. The motors clearly didn't have enough speed and/or torque for the robot to do a "wheelie". If it couldn't raise itself up on two wheels to start with, there was no way it was going to balance on two wheels. We tried increasing the voltage by switching to 8 AAA batteries instead of 6 AAs. But it was nowhere near enough. On top of that, after we had made a number of attempts to fix the problem the motors began to turn even more slowly. One wheel, in particular, was turning very slowly. We checked the voltage across the motor leads and that wasn't the problem. Clearly the motor itself was failing. So the bottom line is we are going to need new, more powerful motors. The ripple effect of that change will result in several more design changes. For instance, more powerful motors will draw more current than the current H-bridge can handle. They also may not fit in the same location on the chassis. At that point we realized we were going to need a Rev 2 on the whole design.

It's not all bad news, however. Here are the things that went well:
  • Using the DVD remote control worked great!
  • Programming the Arduino was easy and fun.
  • The motor control circuit worked well once we added the extra pull down resistors.
  • The overall physical design worked well, including the idea of using a plastic project box as the chassis.
So we actually managed to build a remote controlled car from scratch. That in itself was fun and rewarding. To close, I finally have a few pictures to post so you can see what we built.

Here is the robot all assembled. Do you like the racing stripes? We were experimenting with AAs versus AAAs, so we had rubber bands holding the battery packs on:



This is a closer shot of the front end showing the roller that serves as the front wheel (there's one on each side) as well as the IR remote control receiver and an LED to indicate system status:



And here is a closer shot of the back end. It shows the power switch as well as the USB port used to program the Arduino. The small black button is a reset button. We never needed it because the software worked great:

Saturday, July 18, 2009

Remembering Apollo

I was raised on the space program. On my sixth birthday in 1962 my family moved from Fort Worth to Houston so my father could take a job at NASA. He was an aeronautical engineer at General Dynamics in Fort Worth when it was announced that NASA was building a new space center in Houston. He applied immediately, but it was some months before he landed a job there. At that time the Mercury program was in full swing, but Dad was hired to work on Apollo. President Kennedy had already committed the nation to go to the moon.

It took hundreds of thousands of people to make Apollo 11 possible. I am proud of my father’s role, but it is humbling to realize how many others were involved. He worked initially on the aerodynamic design of the launch escape system. That’s the small rocket that sits atop the capsule, ready to lift it quickly to safety if anything goes wrong with the booster. We still have a wind tunnel model which was used to test one of the early designs. Later he worked on the reentry aerodynamics. Coming back from the moon, the Apollo command module would reenter the Earth’s atmosphere at 25,000 mph, much faster than any previous spacecraft. This was an aerodynamic challenge of the first order. On top of that, Apollo was the first capsule designed to be a solid lifting body, so the astronauts could fly it down to the designated target area. With this ability the Apollo missions routinely landed within sight of the recovery ship.

I remember being a space junkie even before we left Fort Worth. At that time, every mission received full TV coverage from launch to splashdown. I was always glued to the tube. I watched the coverage of John Glenn’s first orbital flight, and I remember, even at five years old, the anxiety about whether his heat shield was loose during reentry. Later I followed every achievement of the Gemini program as NASA worked out the techniques that would be needed for Apollo: longer flights, larger crews, spacewalks, rendezvous and docking. Other boys collected baseball cards and memorized game stats. I became a walking encyclopedia of space trivia. I could tell you the height of the Saturn V rocket, the thrust of each stage, and every detail of the mission profile. I knew the names of most of the astronauts and could tell you which missions they had flown on. I had posters of rockets on my bedroom wall. A packet of publicity photos from NASA was one of my most treasured possessions.

The night of July 20, 1969 our whole family gathered around the TV to watch as the Eagle touched down on the moon. What an exciting time it was to be alive. Something like a quarter of the entire world population was watching with us at that same moment. The sense of wonder, pride and history was palpable. After the landing, it was to be several hours before the astronauts exited the vehicle. I remember we went outside and stood in the backyard, staring up at the moon. How strange to think that two men were there on the surface at that moment. I remember marveling at the thought that something could be in plain view, and yet so far away as to be invisible. It was hard to imagine just how far away they were.

When the moon walk started we were again glued to the television. We sat in the darkened living room of our home: my parents, my sister, my grandfather and me. I sat on the floor near the TV with my grandfather behind me on the couch. When the first ghostly images began to be transmitted we strained to make out what we were seeing. There was no doubt, though, about what was happening the moment Neil Armstrong stepped off the landing pad and onto the lunar surface. His words are burned in my memory. “I’m going to step off the LEM now. That’s one small step for a man, one giant leap for mankind.” The words seemed so appropriate.

However great the novelty and wonder of that moment was for me, I cannot fathom what it must have been like for my grandfather. Born in the Oklahoma Territory in 1892, he often told us stories of the first time he ever saw an automobile and the first time he heard about the Wright brothers at Kitty Hawk. In his lifetime mankind had gone from the first halting steps at heavier-than-air flying machines to massive rockets propelling three people to the moon, a quarter of a million miles away. As we watched the astronauts exploring the surface, every few minutes a title graphic would be displayed on the screen saying “Man on the Moon”. And every time it came on the screen my grandfather would read it out loud, in a tone of voice that spoke volumes. It was as if he couldn’t quite believe he wasn’t dreaming.

My grandfather died in 1979 and my father died in 1995. I have gazed up at the moon thousands of times in the past 40 years, and on none of those occasions was any human presence there to wonder at. Will there ever be again? Surely it will happen again someday, but my grandfather will not be here to see it, nor my father, nor perhaps will I. And now as I think back to that magical night 40 years ago it is as much with sadness as with wonder. The promise of that moment seems yet unfulfilled. I am still a space junkie. I still await eagerly each new development in the conquest of space, but I am chastened by the slow pace at which the future becomes the present. It is in this context that the accomplishments of my father’s generation seem even more extraordinary. Congratulations, Dad, to you and all your colleagues for a feat that only in hindsight, perhaps, we can fully appreciate.

Friday, March 13, 2009

The robot is rolling (literally)

Last weekend Alan finished assembly of the self-balancing robot, Rev 1. This first version isn't supposed to balance on two wheels yet, just roll around on three wheels. It works! Well, sort of. We have some debugging to do, and some design changes are in the offing. I still have no pictures to post, for which I apologize. Alan has the camera and I haven't had a chance to upload them.

It was pretty exciting when we first powered it up. Alan plugged the Arduino into the USB cable so we could download the software, and immediately one of the two wheels started turning! It didn't stop until the control program was finished downloading and booted up.

Now, mind you, I was pretty shocked because we hadn't connected the batteries yet, and the motors are supposed to get their drive current from the batteries. Somehow they were getting 5V power from the USB via the Arduino. I was concerned because the Arduino can only source 40mA from each pin, and if we were drawing too much current it might be damaged. We discovered, though, that if we go ahead and unplug the USB and run from the batteries, the wheel still turns for a couple of seconds, but then everything works once the program boots. We can use the IR remote control to drive it around on the floor. Pretty cool!

So now I have several questions to investigate:

1. How can the motor draw power from the USB through the Arduino?
2. Why is it only one motor that turns at power up?
3. How can I fix it?

Here's the circuit diagram (click for a larger version):

The H-bridge is actually an SN754410 although the diagram says L293E.

I think we can probably keep the wheels from turning at power on by adding pull-down resistors to the enable lines on the H-bridge. I'm guessing the enable pin voltage is basically drifting until the Arduino drives it low in the setup routine. But I really don't understand how the power is getting to the motor. I hope to get some help from the Arduino user forum.

Aside from the mystery of the spinning wheel we have one major design issue. The motors I selected have plenty of torque, but not nearly enough speed, for the robot to lift itself to a vertical position. We are looking at ways we can address this short of simply buying different motors. The first thing we want to try is to get a little more speed out of the existing motors by upping the voltage. We intend to replace the six AA NiMH batteries with eight AAA NiMH batteries. This also makes the robot lighter. Eight cells will give us a nominal 9.6V, which after the 1.4V drop in the H-bridge becomes 8.2V. This is substantially higher than the net 5.8V we have now. The motors are supposedly rated at 12V, although I'm pretty sure they would overheat quickly if driven continuously at this voltage. But it will only take a fraction of a second for it to raise itself to vertical.
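For anyone checking the arithmetic above, the net motor voltage is just the cell count times the 1.2V NiMH nominal, minus the H-bridge drop. A quick sketch (the function name and defaults are mine, using the figures from this post):

```python
def net_motor_voltage(cells, cell_v=1.2, hbridge_drop=1.4):
    """Nominal voltage seen by the motors: NiMH cells in series,
    minus the ~1.4 V lost in the H-bridge."""
    return cells * cell_v - hbridge_drop

# Six AAs today, eight AAAs proposed:
# net_motor_voltage(6) -> about 5.8 V
# net_motor_voltage(8) -> about 8.2 V
```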

We'll try the pull-down resistors this weekend, but we're waiting for parts to convert from AA to AAA batteries.

Saturday, March 7, 2009

Assembling the self-balancing robot

Well, Alan and I both got tired of waiting for me to finish the modeling and simulation I wanted to do, so I threw caution to the winds and ordered all the parts to build my draft design of the self-balancing robot. There is still the chance that the motors may not have sufficient speed or torque, or may have too much backlash to work well. Also, I went ahead and ordered the cheapest accelerometer available at SparkFun (the MMA7260), reasoning that we'd have a go at making it work first. We can always switch to a different sensor or add additional sensors if we have to.

So now Alan is spending this weekend assembling the robot. I have written Rev 1 of the software, and soon we will be putting them together for our first trial runs. For Rev 1, we are not going to try to balance. We have put a small caster on the front of the box so it can drive around as a typical three wheel vehicle. Once we debug the motor control and get a feel for the speed and torque we have available, I will be back trying to develop the balancing control software.

In the meantime I have been testing the accelerometer and, true to everyone’s comments, the measurements are alarmingly noisy. Of course the signal can be filtered, but that introduces delay in the measurements. Whether we can get this to work or not will depend, I think, in large measure on how rapidly the main control loop has to run to keep the robot well balanced. The MMA7260 has a refresh frequency on the X and Y axes of 350 Hz. So you get a new reading about every 3 milliseconds. If we can afford to make corrections to the motor inputs only every 30 ms, say, then we can take the average of the last 10 readings to smooth out the acceleration data. If this turns out to be too slow, the robot will be unsteady and we’ll see lots of random jittering back and forth. On the other hand, noisy acceleration data will cause jittering, too. So we must find the best tradeoff. But I note with some unease that there is still significant noise in the data even after averaging over 10 readings. And, of course, the more heavily the signal is filtered the more delay it creates before the control algorithm will see the beginning of an excursion. This can cause oscillation or even a loss of control. I’ll try to post some specific data at some point so you can see what I’m talking about.
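The averaging scheme described above is just a sliding-window (moving-average) filter. Here's a minimal model of the idea in Python; on the robot this would live in the Arduino loop, and the class name and details are mine:

```python
from collections import deque

class MovingAverage:
    """Average the last n accelerometer readings to suppress noise.
    The cost is delay: a length-n average lags a changing signal by
    roughly (n - 1) / 2 sample periods, which for n = 10 at the
    MMA7260's ~350 Hz rate is on the order of 13 ms."""

    def __init__(self, n=10):
        self.window = deque(maxlen=n)

    def update(self, reading):
        self.window.append(reading)  # oldest sample drops off automatically
        return sum(self.window) / len(self.window)
```

The tradeoff in the text falls straight out of the window length: a bigger n smooths more noise but adds more lag before an excursion shows up in the filtered output.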

Alan and I have had a number of discussions about whether we needed wheel encoders so we can get feedback on the distance traveled by the robot. We are going to build it first without encoders. This, too, is heresy among self-balancing robot builders (uh, I mean builders of self-balancing robots). Partly the standard wisdom arises from how people conceptualize the inverted pendulum problem. The natural way to think about it is that I measure how far the pendulum has departed from vertical, then I move the base that far to get it back under the center of gravity. But of course, for balancing, you get all the position feedback you need from the pendulum itself. However, the more fundamental reason people assume you need encoders is that the accelerometers can’t tell you whether you’re moving, only whether you’re accelerating. There is no way to correct the inevitable error you get from computing velocity by integrating acceleration. So when the robot is commanded to stand still it has no way to be sure it really is, and when it is commanded to move at a certain speed it has no way to determine that either.

My solution to this problem is partly electrical and partly anthropic, if you will. One piece of data we do have is the average current being supplied to the two motors in the forward and backward directions. To some approximation, you expect that if the robot is sitting still on a level surface the average power in each direction will be equal. Of course, there are natural physical irregularities so that is not exactly correct. But I believe we can add a bias term to that balance and treat it as a tuning parameter. We can adjust it to correct for (almost all of?) the drift. That’s the electrical part. The other part is based on the idea that this is not actually an autonomous vehicle. It is just a fancy remote control car. So the human in the loop will be controlling the position and speed to their satisfaction. Of course, this approach does have another downside: the robot will not be able to maintain position on a slope by itself. It will slowly roll downhill. But I think this will seem like a very natural behavior to the human operator, and of course they will compensate.

The only real question, which we’ll find out by building the thing, is how well the drift can be corrected with a constant bias, that is, how stable the bias is. Just in case, though, I bought some optical encoders and eventually I expect we’ll get around to fooling with them – if not on this vehicle, then on the next.

We’ll have another report, with pictures, when Alan finishes construction. I can’t wait!

Sunday, February 15, 2009

IR Receiver Circuit

The IR receiver from Mouser Electronics arrived this week. It came amazingly quickly. I ordered it late Saturday night. It shipped on Monday and was delivered on Tuesday. The chip we are using is the Vishay TSOP4840, which is only $1.10 quantity one at Mouser. Alan built the test circuit to connect the IR receiver to the Arduino microcontroller board using a breadboard:


Here's a closeup of the IR receiver and the remote control.


Here is the schematic for the test circuit:

The Sony remote control modulates the IR carrier wave at 40 kHz. (Other manufacturers use other frequencies. 38 kHz is common.) This carrier wave is turned on and off to create a stream of pulses that carry the data. The SIRC protocol uses pulse width modulation. A command begins with a start bit that is 2400 µs wide. It is followed by twelve data bits, separated by 600 µs gaps. A logical one bit is represented by a 1200 µs pulse and a logical zero is represented by a 600 µs pulse. The TSOP4840 demodulates the IR carrier wave and presents a logical signal on the output pin that is low when the IR carrier is present and high when it is absent (i.e., active low). The output is connected to a digital I/O pin on the Arduino. D2 is chosen because it is one of two pins that can generate hardware interrupts when the value changes. Thus it is only necessary for the software to time the intervals between the interrupts to decode the signal. I set to work on the software and we got the whole thing working without too much trouble.
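To make the timing scheme concrete, here is a rough Python model of the decoding logic. The real version runs in the Arduino's interrupt handler; the function names and the ±200 µs tolerance are my own assumptions, not tested values:

```python
def classify(width_us, tol=200):
    """Bucket a measured pulse width into start / one / zero."""
    for name, nominal in (("start", 2400), ("one", 1200), ("zero", 600)):
        if abs(width_us - nominal) <= tol:
            return name
    return None  # outside tolerance: noise

def decode_sirc(pulse_widths_us):
    """Decode one 12-bit SIRC frame from a list of measured pulse
    widths (the 600 us gaps between pulses already stripped out).
    SIRC sends the bits least-significant first."""
    if not pulse_widths_us or classify(pulse_widths_us[0]) != "start":
        return None
    value = 0
    for i, width in enumerate(pulse_widths_us[1:13]):
        bit = classify(width)
        if bit == "one":
            value |= 1 << i
        elif bit != "zero":
            return None  # malformed frame
    return value  # low 7 bits: command, high 5 bits: device address
```

A frame of `[2400, 1200, 600, 1200]` followed by nine 600 µs pulses decodes to 5 (binary 101, LSB first).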

It’s amazing how sensitive the receiver is. You can be across the room and point the remote at the opposite wall and it will still pick up the reflected signal. This will work great for the robot. You will have to be standing behind it, but the direction and distance are not critical.

The nice thing about using a standard IR remote control is how many different buttons it has. Once you’ve got the software in place to decode the commands you can define as many commands for the robot as you would like. We expect to have at least seven: stand up, lie down, go forward, go back, turn left, turn right and stop. One interesting issue is that the remote control repeats the command every 45 ms for as long as you hold down the button. It turns out to be essentially impossible to tap a button quickly enough to send only one command. Two or three is more typical. The robot control software will have to determine when a command was doubled or tripled through an auto-repeat and compensate.
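One way to handle the auto-repeat is simply to drop any identical command that arrives too soon after the last one. A sketch of that idea, modeled in Python (the 150 ms threshold is a guess that would need tuning on the real robot):

```python
class RepeatFilter:
    """Collapse auto-repeated IR commands into a single event.
    SIRC remotes resend the command every 45 ms while the button is
    held, so an identical command arriving within gap_ms of the
    previous one is treated as a repeat and ignored."""

    def __init__(self, gap_ms=150):
        self.gap_ms = gap_ms
        self.last_cmd = None
        self.last_time = None

    def accept(self, cmd, now_ms):
        """Return True if this command should be acted on."""
        repeat = (cmd == self.last_cmd
                  and self.last_time is not None
                  and now_ms - self.last_time < self.gap_ms)
        self.last_cmd, self.last_time = cmd, now_ms
        return not repeat
```

With this in place, a quick tap that actually sends two or three frames still produces just one "go forward" event, while holding the button down can be treated as a sustained command if we want.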

Next up is to go back and finish the physics model and the simulation. I set it aside last week when I got stuck, but I’ve asked my brother-in-law, whose degree is in physics, to help me. So it’s back to school for me this week!

Sunday, February 8, 2009

Bubbling to the Surface

It is time to confess a new obsession. Like most obsessions it begins by contagion. I caught this one from my son Alan. We are going to build a robot. Well, not exactly. We want to build a remote-controlled self-balancing two-wheeled vehicle. Think Segway, only very small. And hyphenated. It's not an original idea, of course. Besides Segway, lots of amateurs have built such things. To the extent that we have a new twist on the concept, I am interested in seeing how cheaply we can do this. I think we might be able to do it for under $100. Most such projects seem to be at least $200-300. Cheap also means small (smaller motors cost less) so ours will be smaller than the others.

I am not a hardware guy. I'm like the punchline of the old programmer joke: How many programmers does it take to change a light bulb? None. "Hey, man, that's hardware!" Fortunately for our collaboration, Alan is much more of a hands-on kind of guy. In fact, I think we make a great team. I can't wait to take a crack at the control algorithm and he's itching to do all the soldering and wiring and assembly.

Here's how we got into this. For several months Alan has been surfing websites for DIY electronics projects. For Christmas he asked for an Arduino, an inexpensive microcontroller board based on an open source hardware design. Until Alan asked for this I never knew such things existed. Then I started doing a little investigation, and the obsession began. It is such a great time to get involved in DIY electronics. I had no idea there were so many sophisticated components available so cheaply, like three-axis accelerometers in an IC chip that costs only $10 or $20. And the programming reminds me of the old days programming for my first home computer: an Apple II. Low-level coding on an 8-bit micro and direct manipulation of the hardware. Wonderful! (That's geek nostalgia, friend.)

I see this effort as a sequence of sub-projects:

1. Do a preliminary hardware design
2. Develop a physics model for simulating the vehicle
3. Use the simulation to develop and test the control algorithm
4. Develop an IR remote control decoder to control the vehicle
5. Build rev 1 of the vehicle as a three-wheeled scooter
6. Debug and tune the balancing on two wheels

I've already roughed out the hardware design with an eye toward selecting and pricing components online. The only piece I haven't figured out yet is how to cheaply measure distance traveled. The obvious answer is an optical wheel encoder, which you can buy as a kit for DIY robotics. But if we're going to keep the hardware budget under $100, we'll probably need to do something cheap and homebrew.

Most folks who have built one of these things use both a gyro and an accelerometer. The gyro gives a stable rate signal that you can integrate to get angular position, but it is subject to a lot of drift. The accelerometer gives a very noisy signal, but it can be filtered and used to correct the gyro drift. Again, to save money I'd like to try to make our vehicle work with just an accelerometer. I want to use the simulation to see how much noise I can tolerate, and get a sense for the bandwidth and resolution I need on the accelerometer. This past week I've been working on the physics model. Boy, my freshman physics is rusty! I suppose since it's been nearly 35 years that's not too surprising. I've been beating my head against it for days.
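For reference, the gyro-plus-accelerometer fusion most builders use boils down to a one-line complementary filter; this is the part I'm hoping to skip. A Python sketch, with a typical alpha value rather than anything from this project:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of the standard gyro/accelerometer fusion.
    Integrate the gyro rate for a smooth short-term estimate, then
    nudge it toward the noisy but drift-free accelerometer angle.
    alpha = 0.98 is a commonly cited value, not a tuned one."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

The gyro term dominates moment to moment (smoothness), while the small accelerometer term slowly pulls the estimate back to truth (drift correction). Dropping the gyro means living with whatever noise survives the accelerometer filtering.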

At the same time I've been reading up on IR remote controls. We want to use a Sony remote from our DVD player to control the vehicle. You can buy an IR receiver for $1 or $2, but I'll need to study up on the SIRC protocol and program the Arduino to decode the signal. Last night I ordered the part and started looking into the programming.

Well, that's where we are so far. I have in mind to post sporadic progress reports here as we move forward. If we ever get it working I'll post a few video clips, too. Now that I've posted this entry the pressure is on to actually do something!

Saturday, January 17, 2009

Word of the Day - Snarge

Lately the airwaves have been filled with news about the US Airways pilot who made the miraculous emergency landing on the Hudson river after hitting a flock of geese. We've all been amazed by the stories and pictures. It's been educational, too. Many people didn't realize that a few birds can bring down a big jet. That angle of the story leads to today's word of the day:

Snarge - what remains of a bird after it strikes a plane.

Birds are actually a significant hazard in aviation, and bird strikes occur regularly. So regularly that there is a lab at the Smithsonian Institution for identifying the bird species from whatever goo and feathers are left. This information helps experts understand how to improve the safety of airplanes and airports. And who heads the Feather Identification Lab at the Smithsonian? Her name is Carla Dove! Gotta love that.