Academy Interview: Jim Crocker on Systems Engineering
September 30, 2010 Vol. 3, Issue 9
Veteran systems engineer Jim Crocker of Lockheed Martin talks about doing the right things versus doing things right.
James Crocker is widely regarded across the aerospace community as a leading practitioner of systems engineering. At the Lockheed Martin Space Systems Company, he is responsible for space science, planetary exploration, and remote sensing, including programs for the Spitzer and Hubble space telescopes; Defense Meteorological Satellites; International Space Station; Geostationary Operational Environmental Satellites; Mars Odyssey, Reconnaissance Orbiter, Scout; Phoenix; Juno; Jupiter Orbiter; and the GRAIL lunar mission. In the early 1990s, Crocker conceived the idea for the COSTAR system to correct the Hubble’s flawed optics.
As director of programs for the Center for Astrophysics at Johns Hopkins University, he led the system design effort for the Advanced Camera for Surveys (ACS), a scientific instrument installed in the Hubble Space Telescope in March 2002 that improved the performance of the telescope by an order of magnitude.
As head of the programs office at the Space Telescope Science Institute, Crocker led the team that readied the science ground system for operation of the Hubble Space Telescope through orbital verification and science operations on orbit. Crocker previously designed electronics for scientific experiments on Skylab in support of NASA’s Marshall Space Flight Center.
He is the recipient of numerous honors including the Space Telescope Science Institute Outstanding Achievement Award and two NASA Public Service Medals for work on the Hubble Space Telescope.
Crocker spoke with ASK the Academy in August about his career and his reflections on the discipline of systems engineering.
ASK the Academy: Hubble has been intertwined throughout your career. What was your first involvement with it?
Jim Crocker: 1983 was the first time I was officially involved. The first time I got a glimpse of something related to it was actually down at Marshall. I was supporting the Marshall Space Flight Center in the mid-1970s, working on Skylab. We were getting to store some spare solar arrays in a facility there, and there was this full-size model of this thing called the LST—the Large Space Telescope—and I thought, “Wow, that’s cool. I’d like to work on that.” Seven or eight years later, I was.
ATA: What was your job?
JC: I was at the Space Telescope Science Institute. AURA (the Association of Universities for Research in Astronomy) had won the science operations contract for Hubble. I was hired to help get the ground system ready, and ended up head of the program office there, getting a lot of the support systems for science operations, guide star systems, and other things ready to go on Hubble.
ATA: You started your career as an electrical engineer. How did you come to be a systems engineer?
JC: Much of my early career—and even today—focused on scientific instruments of one sort or another. When you think about it, Hubble is just one huge scientific instrument. A lot of my career has been focused on instruments, and when you get into building instruments, it drives you into systems engineering. Instruments are usually dominated by electrical engineering and optical engineering, which in most instances is kind of a sub-field of electrical engineering. It’s usually taught in the electrical engineering department. As a result of that, you have to know thermal and optics and computers and software and all those ancillary disciplines beyond electrical engineering. It drives you in the direction of systems engineering.
When I went to school, I don’t know that there were any formal systems engineering courses. You certainly couldn’t get a degree in it. Since electrical engineering had expanded to include hardware and software as major sub-disciplines as well as electro-optics, it was kind of a place that a lot of systems engineers of my generation came out of. I think particularly the exposure to instruments early in my career started pushing me in that direction—at least giving me the background that I needed to do systems engineering.
A lot of the best systems engineers I’ve seen seem to come out of instrument backgrounds. There’s another one (common background) too: a lot of them come off farms. I really think that when you’re on a farm, you work on mechanical things and electrical things. Maybe it gives you that “having to understand something about everything” mentality. That’s anecdotal, but I think a lot of people would concur with the (value of an) instrument background, because of the broad discipline scope that you have to have and the opportunity to do that. Instruments are usually small enough that you can get your arms around the whole thing. It makes a great training ground for systems engineers.
ATA: You’ve said that a systems engineer has to have broad knowledge. How did you broaden your own knowledge base over time?
JC: It goes back to instruments. I’d been driven in the instrument arena to learn about these other disciplines. Once you get to a certain level of proficiency, it allows you to go deeper into a subject. I know when I was at the Space Telescope Science Institute, for example, part of my responsibility was being the liaison between the scientists at the institute and the engineering teams around the country who were building the instruments for the telescope. While I was familiar with optics from a cursory point of view, doing instruments for the Hubble Space Telescope, where the optics were very complicated, very precise, and very large—that gave me the opportunity to broaden my understanding of optics, and it forced me to take some more formal coursework beyond the cursory coursework I’d done in college. That sparked a lot of my interest in larger optical systems, and because of that, I ended up going over to Europe with Riccardo Giacconi, who was the director of the Space Telescope Science Institute, when he went over to run the European Southern Observatory, where they were building four ground-based eight-meter telescopes.
It’s not just depth. You go broad, and then you go deep. And then you go broader in another area, and then you go deep. It’s very easy today just to continue going deeper and deeper in a narrower and narrower niche. To get out of that, you have to go broader, and then as you go broader you go deep, and then you find another area to go broad in again. It’s a combination of expanding your knowledge about things and then going fairly deep into them.
Systems engineering is not just knowing the theory behind something. The real trick as you mature in these areas—the “going deep” part—is understanding how things are fabricated, what the risk in fabrication is. In optics, for example, you learn all about optical coatings and all the idiosyncrasies about how these coatings perform, how they get damaged and are not quite up to spec, and what in the process causes that. As you learn these things, it allows you to design systems that have more resilience. When you don’t have at least an understanding of where the real challenges are, you’ll design something that can’t be built. You have to know enough to know what to stay away from, and what can and can’t be done.
ATA: It’s true of residential architects too.
JC: That’s exactly the point. I use a lot of analogies to residential architects because people understand architecture and can relate to the fact that you have an architect and a builder. Systems engineering has this architectural part and this building part. Peter Drucker said it’s more important to be doing the right thing than to be doing things right. Of course in our field, we have to do them both right, but Drucker’s point was that it doesn’t matter how well you do the wrong thing. A lot of my career in systems engineering has been focused on the architectural part—getting the thing conceived so that the end user gets what he or she expected.
ATA: What’s an example of making sure you’re asking the right question?
JC: I think we as a community are going through something right now that’s relevant to that question. It has to do with cost and affordability. When I went over to ESA (the European Space Agency) and did a program review with Riccardo (Giacconi) to understand where this multi-billion dollar ground-based telescope program was—this was to build four enormous telescopes that were optically phased together, something that had really never been done before—I came to understand something at the end of the review. I said (to the team), “Let me tell you what I heard you say. What you said is that you are building the most wonderful, phenomenal observatory in the history of man, better than anything else in history, regardless of how long it takes or how much it costs.” And they said, “Yeah, that’s exactly what we’re doing.” I said, “Well, we have a problem then, because there’s only so much time and so much money. On the time part, if we don’t get this telescope built here on this new schedule that we’re laying out, the Keck Telescope that the U.S. is building and others—the Gemini—their scientists will skim the cream. Theirs may not be as good as yours, but they will skim the cream. And there’s only so much money. So we have to build the best telescope that’s ever been built, but within the cost and schedule that circumstances are going to allow us to do, because getting there late is going to mean we’re not going to be the first to do the science.” That was a real paradigm change for everybody, and understanding that really led us to a place where we did come in within a few months of our schedule and right on our cost. It was a big paradigm change.
I think we’re going through something similar to that now. Certainly in NASA programs, and I think in DOD programs as well. We’ve had this emphasis on schedule and now on cost. The thing I worry about—and here it gets into “make sure you’re doing the right thing.” In the “Faster, Better, Cheaper” era, people got focused on cost and schedule, but they missed the fact that what was increasing was risk. There was not a clear communication about what the real problem was, and because of that lack of clarity about understanding the right problem, what actually happened was we pushed the cost performance to a point that was so low that the missions started to fail and we weren’t able to articulate what we were trying to solve. People thought we were just continually trying to do cheaper, cheaper, cheaper. Einstein said you should make things as simple as possible, but not simpler. My twist on that was that you should make things as cheap as possible, but not cheaper. Because of that, we got into mission failures across the industry as we pushed below a point while not clearly articulating what the problem was other than “Well, let’s make it cheaper.” It went off the cliff.
As we get into this next reincarnation of this cycle that we go through, we have to do a much better job of articulating the problem and knowing what it is we’re trying to achieve. What we’re trying to avoid here really is overruns—the unpredictability of a lot of our programs. We get into situations where not one but a large number of programs overrun. I’m not sure the desire is really to do it cheaper. It’s certainly not to do it cheaper than possible. It’s to do it predictably—both on cost and schedule—and still have mission success. At the end of the day, if you do it faster and cheaper and the mission fails, you’ve really wasted the money. So it’s important to make sure this time that we’ve really understood what we’re trying to accomplish and articulated it well, so that we can all be solving the right problem.
Remember the game “Telephone,” when you were a kid, where you whisper something in somebody’s ear? You go through ten or fifteen people and it comes out the other side, and you wonder, “Where did that happen?” What’s fun is to go along the way and get people to write down what they’ve heard. You go back and you see where these things get very slightly changed from person to person, and it’s totally different at the end.
In companies the size of ours (Lockheed Martin) and in agencies the size of NASA, when we try and communicate some of these really challenging goals, the ability to really crisply and clearly articulate the problem we’re trying to solve is enormously important. It’s how we go wrong and end up in a ditch.
ATA: Your point about predictability is interesting. What we’re trying to do is say we can reliably deliver on the cost and schedule that was promised at the baseline.
JC: That’s right. We get into this thing of “We have to do it cheaper,” and we’ve already started to miscommunicate.
In some instances, that’s not true. Today if you look at the launch vehicle situation and the retirement of the Delta II, if we continue doing business the way we’ve been doing business, right now there’s just not a Delta II class vehicle available, so you either have to go much smaller (Minotaur) or much larger (Atlas). So people say launch costs are unaffordable. That’s true, but it doesn’t necessarily mean you need a cheaper launch vehicle. It could mean you need to do more dual launches with a bigger launch vehicle. That has its own problems. Or maybe we can figure out how to do missions on smaller buses with smaller payloads and fly them on smaller vehicles. It’s just so important in systems engineering to understand and be able to communicate to everybody what the problem is that you’re trying to solve.
I think Dan Goldin’s “Faster, Better, Cheaper,” which everybody thinks was not successful, actually was successful. Dan said we’re going to do more missions, they’re going to cost less, we’re going to have more failures, but at the end of the day we’ll have done more with the money than otherwise. He said, “I think as many as three of ten could fail.” Three of ten failed. If you go back and look, we did the other missions for less money with that approach. Two things happened. One, I don’t think we had the buy-in of everyone involved, and we didn’t properly communicate expectations. Two, we got into this thing where we might not have had any failures if people had understood where to stop, and that had been clearly communicated.
That’s why I say it’s important as we articulate where we’re going this time that we understand is it “cheaper, cheaper, cheaper until we break,” or do we want predictability so we can plan to do things right with no surprises?
ATA: What are the signs that you might not be working on the right question?
JC: I don’t know who invented Management by Walking Around.
ATA: I’ve heard it was Hewlett and Packard.
JC: I’ve heard that too. I don’t know if it’s anecdotal or true. I think it was actually Packard who was the MBWA person. I certainly learned early in my career that as a systems engineer responsible for the architecture, getting around to the people who are flowing the requirements down to low-level systems and actually going as far down the path as you can and talking to people about what they’re doing and what their objectives are and having them explain them to you is really the proof of the pudding. There are two pieces to this. One is you have to do the right thing, and then it gets distorted because of the “Telephone” effect. That’s where going down and talking to people who are doing critical subsystem design—just talking to them to make absolutely sure that you understand that they understand what the essence of this thing is all about—comes in. That’s number one.
The second one is really making sure at the front end that you understand and you can communicate and have somebody tell you back at the high level what they thought they heard. Then you really have to capture that in the requirements. I’ll use Faster Better Cheaper again as an example. Goldin said, “We’re going to do this,” but I don’t think he articulated it well enough to get it into requirements. It’s that first translation step into DOORS where you have to make sure that what got into DOORS, what got into the requirements database, really does the high-level thing that you want to accomplish.
There’s really a third component too. We have a tendency in our business not to understand who the real true end-user is. Certainly we don’t spend as much time as we often should really deeply understanding their needs operationally. This feedback of testing what you’re going to accomplish with the end user is critical. That’s a problem because you don’t speak the same language that they do. One of the things that we (Lockheed Martin) actually do here in our Denver operations is really interesting. We actually have people who rotate through all the life cycles of a project. They might start a program in the proposal phase, and then many of those people will end up in the implementation and the design phase (and go) all the way into the assembly, test, and launch. And then, since we fly missions as well, they’ll go in and fly the mission. That’s where you see the light bulb go on in somebody’s head when they say, “I’ll never do that again.” It really feeds back into the front of the design, and it makes people have a very rich understanding. A lot of times when we as systems engineers haven’t had the experience of actually operating some of the systems that we build, we just don’t know any better.
If you’ve ever changed the oil on a car, you sometimes ask yourself how the engineer could have been so stupid to put the oil filter where it is. It seems like it’s just impossible to get to sometimes without pulling the engine. (Laughs.) But then you go back as the engineer, and you realize he probably didn’t have visibility into the fact that a wheel strut was going to block access to the oil filter. So it’s only when you’ve been there trying to change the oil filter that you really understand that you need to know about more than just the engine to decide where to place the oil filter. That’s an important aspect of it too.
I think those three things, if you exercise them, can help you know that you’re not doing the wrong thing.