Ted Mills discusses knowledge sharing in NASA’s Office of the Chief Financial Officer.
Ted Mills is the Knowledge Services Lead for NASA’s Office of the Chief Financial Officer. Mills joined NASA in 2013 as an Operations Research Analyst in the Office of Evaluation, Cost Analysis Division. He is a retired Navy Captain and former Commanding Officer of an aircraft carrier-based aviation squadron and a major shore installation. He has had budgetary assessment experience at multiple levels in the Navy, where he provided analytical support to the Navy’s $120 billion budgetary plan, including the $35 billion Naval Aviation budget and procurement plan. Mills is a graduate of the U.S. Naval Academy and holds a master’s degree in operations research from the Naval Postgraduate School. He is also a graduate of the Naval War College and the National Defense University’s Joint Forces Staff College.
How does knowledge sharing help ensure that NASA’s cost estimates are continually improving?
The knowledge bases let us apply estimating techniques that are critical to producing trustworthy estimates of what things are going to cost, so you can plan your future endeavors and keep them within your expected budget margin. For example, we maintain the policy requiring a Joint Cost and Schedule Confidence Level, or JCL. If your estimate carries a 70 percent confidence level based on the data and the simulation runs you have, then you have a 70 percent chance of being on cost and on schedule, given all the knowledge we have about your project. We use that a lot in planning. You can’t do those types of things without the knowledge databases that we maintain.
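The confidence level Mills describes can be illustrated with a minimal Monte Carlo sketch: draw many simulated project outcomes and count the fraction that land within both the cost cap and the schedule cap. The triangular distributions and caps below are hypothetical stand-ins for illustration, not NASA’s actual JCL model or data.

```python
import random

def simulate_jcl(cost_dist, sched_dist, cost_cap, sched_cap, n=100_000, seed=1):
    """Estimate a joint confidence level: the fraction of simulated
    outcomes that finish within BOTH the cost cap and the schedule cap."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if cost_dist(rng) <= cost_cap and sched_dist(rng) <= sched_cap:
            hits += 1
    return hits / n

# Hypothetical project: triangular(low, high, mode) cost in $M and
# schedule in months -- illustrative numbers, not real project data.
cost = lambda rng: rng.triangular(90, 180, 110)
sched = lambda rng: rng.triangular(30, 60, 38)

jcl = simulate_jcl(cost, sched, cost_cap=150, sched_cap=50)
print(f"JCL = {jcl:.0%}")
```

A 70 percent JCL in this sense means 70 percent of the simulated futures stay inside both caps; raising the caps raises the JCL, which is how confidence-level policy feeds back into budget planning.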
First of all, there’s the institutional knowledge of how to do a JCL, and we have some of that here. So, how do we relay that? We go out, talk to people, and teach them how to do it. It becomes routine for some people once they have experience with it. And then all the data, all the information, and all the past risks incorporated in a JCL live in the Cost Analysis Data Requirement, or CADRe, dataset, so they’re pulled from there.
Data is the lifeblood of what we do. There are several different data services we tap into from across the agency, and the CADRe database is the biggest source of knowledge we work with here. It’s accessible throughout the agency, and every project is captured in it. When a project reaches a milestone, we take a snapshot of where it stands on cost, schedule, and expenses, and how that compares with its projections, and it all gets recorded in the database.
The JCL tool has been around since 2009. Before that, we had a very high percentage of programs that were over cost or behind schedule, and GAO was hitting us very often about it. Since we instituted these methods, most of the major projects are very close to being on cost and on schedule, nearly all the time. There are some exceptions, of course, such as the James Webb Space Telescope, and I’m not counting big-ticket items outside the norm, such as the Space Launch System. But most of the projects are doing very well. So the knowledge management effort we have is paying benefits for the agency.
Do you have a favorite example of successful knowledge sharing?
The biggest success we’ve had related to knowledge sharing is CADRe. When we started instituting it around 2005, there were very few data sets; now there are hundreds. Before that, there was no place in NASA where you could reference previous project data and historical knowledge for cost, schedule, and basic performance of projects. Everything was anecdotal or sat on someone’s hard drive. CADRe has institutionalized it, and anyone who has a good reason can go in there and find those things. I would call that a huge success story.
What are some of the most prominent knowledge challenges in your organization?
How and where this information should be presented and who should be consuming it. Sometimes this stuff isn’t something people want to hear. We hear people very often say, ‘Well, I’m going to manage that issue. I don’t care if you have 20 different projects that have run into this obstacle before. We’re not going to run into it,’ or ‘It doesn’t reflect my project’ or something like that.
Frankly, the tendency for these things to go awry can be pretty high, and people don’t want to hear that. They want to know that their project is just going to go fine. But, based on previous data, there are risks out there that people don’t want to acknowledge. Now, that’s not to say people are being obstructionists. I don’t mean that. But it’s getting people to realize that there is a set of resources beyond what they have internally that could probably help them.
Are you observing any trends that affect knowledge management going forward?
I think managing large data sets is a trend everywhere. NASA isn’t trying to do what Amazon does, combing through thousands and thousands of pieces of data to find commercial trends. But there is a broad data management and data science effort led by the CIO’s office that I think is really worthwhile, and open data is part of that.
What’s the biggest misunderstanding that people have about knowledge?
I would say it’s the idea that knowledge has to be hoarded to protect your project or program. I just don’t think that’s the case. If there’s ever a situation where sharing data feels like a threat to a program because it exposes challenges and problems, then not being open and not sharing data probably lets those problems spiral. Opening up an organization so that it can be examined means that problems get rooted out. The process of making decisions and policy is sort of like sausage-making: people don’t like to watch it, but they shouldn’t be threatened by it. Knowledge sharing in a very open manner is only going to be beneficial, because you bring the right expertise to the table.