
Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. Ground Systems also developed a methodology to evaluate subsystem reliability, availability and maintainability to ensure that ground subsystems with allocated launch availability requirements could meet or exceed their requirements. The verification analysis developed quantitative estimates of actual subsystem availability based on design documentation, testing results, actual performance history (for legacy subsystems that will support Constellation) and other information. The results of the verification analysis are used to verify compliance with requirements or to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process and describe the ground systems methodology for completing quantitative reliability, maintainability and availability analysis of new design, legacy and hybrid (legacy with new design) ground subsystems.


Nomenclature

λ = failure rate

MTBF = mean time between failure

MTTF = mean time to failure

R = reliability

t = time

I. Introduction

The viability of the two launch solution selected for the Constellation Lunar Architecture (Ares I/Orion and Ares V/Altair) is highly dependent on the reliability and maintainability of ground systems and the flight vehicles, particularly after the first vehicle has launched. Due to limitations in how long the first vehicle can loiter in orbit and still successfully achieve the mission, the second vehicle must deliver a very high probability of successfully launching in sufficient time to avoid wasting the first-launched on-orbit spacecraft. Accordingly, the Constellation Program developed a “probability of launch” requirement that bounded the acceptable risk of mission failure due to a second vehicle launch failure at less than one percent. This requirement stated “The Constellation Architecture shall have a probability of crewed lunar mission launch of not less than 99 percent during the period beginning with the launch of the first vehicle and ending at the expiration of the last launch opportunity to achieve the targeted Trans-Lunar Injection window.” This overarching requirement was decomposed into two “child” requirements that flowed to the launch vehicle, the spacecraft, ground systems, and mission systems.

1) The first “child” requirement stated that the vehicle, spacecraft, or ground systems shall have a probability of launch of not less than a specified percentage (between 94 and 99 percent) beginning with the decision to load cryogenic propellants and ending with the close of the day-of-launch window for the initial planned attempt. This “critical time period” was originally estimated at about fourteen hours and later revised to ten hours.[5]

2) The second “child” requirement stated that in the event of a failure, the vehicle, spacecraft, or ground system must deliver a probability of repair of between 30 and 45 percent, and readiness for launch within an acceptable time period.

At first consideration, the child requirements would seem inconsistent with the parent requirement for the architecture to deliver not less than a 99 percent chance of success. For example, if the vehicle and the spacecraft each delivered a 98 percent probability of success and ground systems delivered a 99 percent probability of success, the architecture would deliver only a 95 percent probability of success. This would be true, but only for the first launch attempt. The second child requirement, which defined the maintainability standards, enables the likelihood of a second launch attempt. In the event of a launch failure, the combined likelihood of a successful repair and at least one additional launch attempt enables the architecture to satisfy the overarching requirement to deliver a probability of successful launch within the acceptable time period of not less than 99 percent, as the notional sketch below illustrates.
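The retry logic can be made concrete with a notional calculation. The sketch below uses the illustrative probabilities from the paragraph above; the repair probability and the assumption that a second attempt performs like the first are ours, for illustration only.

```python
# Notional check of the parent/child requirement logic (values are
# illustrative, not program numbers).

p_first = 0.98 * 0.98 * 0.99   # vehicle x spacecraft x ground, first attempt (~0.95)
p_repair = 0.40                # assumed in-time repair probability (child req. 2: 30-45%)
p_second = p_first             # assume a second attempt performs like the first

# Success = first attempt succeeds, OR it fails, is repaired in time,
# and a follow-on attempt succeeds.
p_overall = p_first + (1 - p_first) * p_repair * p_second

print(f"First attempt only:    {p_first:.4f}")    # ~0.9504
print(f"With one retry chance: {p_overall:.4f}")  # ~0.9693
```

With these notional values, a single repair-and-retry opportunity recovers a substantial portion of the shortfall; the actual allocations and the number of launch opportunities within the Trans-Lunar Injection window determine whether the 99 percent threshold is met.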

The GOP probability of second launch and maintainability assessment effort was divided into three overlapping phases:

1) Phase I consisted of determining which ground subsystems, or portions of subsystems, affected the probability of second vehicle launch. The litmus test was whether a subsystem could cause a launch hold or scrub during the “critical time period”; quantitative availability requirements were then allocated to these subsystems.

2) During Phase II, the RMA team develops a subsystem model to produce quantitative estimates of subsystem availability. The model uses a functional reliability block diagram tool from the Relex software suite and is based on design documentation (drawings, parts lists, operational concepts, etc.), testing results, actual performance history (for legacy subsystems that will support Constellation) and other information. The results of Phase II analysis are used to verify compliance with the allocated subsystem launch probability requirement and to highlight opportunities for design improvement.

3) Phase III, completed concurrently with or after the Phase II analysis, leverages the functionality of the subsystem reliability block diagram to define all possible failure paths and their likelihood. A “cut set” listing is developed and ranked by unreliability. The result provides a clear prediction of the most likely sources of unreliability, based on the subsystem model and the associated data.

This paper describes how the Constellation Ground Operations Project (GOP) applied quantitative Reliability, Maintainability and Availability (RMA) theory, tools and techniques to allocate launch probability requirements and to assess compliance with those launch probability requirements at a ground subsystem level. Additionally, we will describe how the launch probability assessment was leveraged and translated into assessing ground subsystems maintainability, evaluating compliance with the second child (maintainability) requirement, and focusing efforts on logistics support and operations planning.

It should be noted that, due to the sensitivity of the detailed analysis products, only fictional and notional descriptive examples are provided for illustration.
II. Ground Systems Requirements and the Initial Allocation Process

Constellation ground systems received a 99 percent probability of launch requirement allocation. Historically, throughout the Space Shuttle Program, ground systems delivered approximately an 86 percent probability of successful launch support.[1] The Constellation architecture would therefore require significant improvement over the historical launch probability. In response, the Constellation GOP developed an approach to allocate requirements to an appropriate level of fidelity that met or exceeded the 99 percent probability of launch requirement.

Early in the process, the analysis team came to the conclusion that availability requirements should flow-down directly to the subsystem level. This aligned the launch availability analysis with the design team structure and design review process.

The initial (Phase I) analysis of the second launch availability requirement consisted of determining which Ground Systems elements, subsystems, or portions of subsystems could affect the probability of second launch, based on whether they could cause a launch hold or scrub during the critical time period. Subsystems that met the criteria were carried forward and included in the analysis. Subsystems that did not (such as the crawlerway and Vehicle Assembly Building (VAB) access platforms) were excluded. Subsequently, quantitative availability requirements were developed and allocated to each included subsystem to support Ground Systems' 99 percent probability of launch requirement.

The objectives of the Phase I launch availability effort were:

1) To understand the operational concepts and process flows associated with second launch preparation.

2) To determine which Ground elements and subsystems, or portions of subsystems could impact the probability of crewed launch, specifically causing a launch hold or scrub during the critical time period.

3) To allocate quantitative availability requirements to specific subsystems in order to meet or exceed the overall Ground System second launch availability goal of 99%.

4) To document the initial availability subsystem allocations in the Ground Systems - Systems Requirements Document (GS-SRD) and GS Element Requirements Documents (ERDs).

5) To help focus design priorities to meet overall requirements.

6) To support design review milestones.

7) To highlight differences between allocated availability requirements and best estimates of actual performance.

8) To identify risk: differences that cannot be resolved by other means may drive design changes (and cost).

Figure 1 (below) describes the overall approach to addressing the probability of launch requirement. Beginning at the project start, the top line of Figure 1 depicts the initial orientation process of documentation review and evaluating the characteristics of each subsystem. With all ground subsystems captured in a matrix, the team began to analyze which Ground subsystems, or portions of subsystems, could impact the probability of crewed launch, specifically causing a launch hold or scrub during the critical time period. Subsystems that did not meet the criteria were excluded from the analysis. Subsystems that met the criteria were subjected to further analysis to determine the following:

1) If the subsystem was repairable within the launch phase constraints, such as a pad clear condition.

2) If the subsystem was relatively “high” or “low” availability.

“High” availability subsystems would be required to deliver not less than a 99.99 percent probability of successful operation through the critical time period. “Low” availability subsystems would be required to deliver not less than a 99.90 percent probability of successful operation through the critical time period. Factors indicating that a subsystem should be designated as a high availability subsystem included subsystem criticality, redundancy, repairability, and/or highly reliable performance demonstrated by a legacy subsystem. Factors indicating a low availability designation were non-repairable subsystems, low historical performance, low redundancy, and/or design risk. Subsequently, a third category (“very high”) was added for subsystems that, due to their construction, were so monumental that a failure was unlikely in the extreme. These subsystems were assigned a requirement of 99.999 percent probability of successful operation through the critical time period.

The RMA team developed an initial matrix that captured the 80 subsystems listed in the Master Subsystems List, the OPR, the associated element(s), whether the system was included or excluded from the analysis and why, whether the system was repairable, and an initial “high”, “low”, or “very high” availability assessment for “included” subsystems. This matrix was continuously refined with input and support from the various technical organizations, the Space Shuttle Launch Team, Ground Systems, and Safety and Mission Assurance staffs. Support from each of these organizations was superb. Each stakeholder organization contributed significantly to the quality and clarity of the final allocation.

In this process, adjustments were made, assumptions were challenged, and a refined listing was developed and finalized for requirements flowdown to the subsystem level. The final availability allocations by subsystem were tabulated and flowed to the subsystem level as requirements.

The first question for a subsystem that could cause a hold or scrub was, “Is the subsystem repairable within the launch phase constraints?” This was a particularly important part of the assessment. Many subsystems were wholly or partially included in the launch clear zone. After the initiation of cryogenic loading operations, access to the launch clear area becomes extremely limited. If a repair is required, the launch is generally scrubbed, propellants are drained from the vehicle, and access is restored after confirming a safe work environment. Subsystems within this zone were analyzed for subsystem reliability during the critical period, since repairs could not contribute to subsystem availability. Subsystems with components that resided outside the launch clear area were allowed credit for repairs during the countdown in the event of a failure, if the repair could reasonably support the countdown time limitations.

The results of the initial allocation were loaded into a Relex Reliability Block Diagram for analysis. Of the 80 subsystems in the Ground Elements Master Subsystem List:

* 25 subsystems were excluded as they were evaluated as having no impact on launch availability within the critical timeframe.

* 2 subsystems were evaluated as “low” availability (Weather Instrumentation and the Launch Control System).

* 46 subsystems were evaluated as “high” availability.

* 5 subsystems were evaluated as “very high” availability due to the extremely low probability of structural failure within the critical time frame (Facility Grounding and Lightning Protection, Launch Mount, Lightning Protection System, Mobile Launcher Structure [Base and Tower], and Safe Haven [Structure])

* 1 subsystem was evaluated as “TBD” awaiting further requirements for the subsystem (Ground Cooling).

* 1 Subsystem was consolidated (Weather Meteorological into Weather Instrumentation).

Overall, 53 subsystems were included in the Phase I output Relex RBD as depicted in Figure 2 below. A simple reliability calculation was used to assess the overall reliability of the 53 subsystems over the 10 hour critical time period. In reliability terms, when a component fails, it is not repairable; the term Mean Time to Failure (MTTF) is normally used in this context. The Relex RBD tool uses the term Mean Time Between Failure (MTBF), which is normally associated with availability calculations, for both reliability (non-repairable) and availability (repairability included) calculations. Within the tool, a check box was used to designate all subsystems as non-repairable for this calculation.

The MTBF values in the RBD boxes represent MTTF and correlate to the allocated reliability values above the box. This was accomplished by calculating a constant failure rate distribution for the 10 hour period to achieve the desired reliability, using the calculation technique below:

R(t) = e^(−λt), therefore λ = −ln(R)/t and MTTF = 1/λ = −t/ln(R)
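As a sketch of this conversion (our code, not the Relex tool), the allocated reliability targets map to the following MTTF values over the 10 hour window:

```python
import math

# Convert an allocated reliability R over mission time t to the equivalent
# MTTF for a constant failure rate: R(t) = exp(-t / MTTF).

def mttf_from_reliability(r_alloc: float, t_hours: float = 10.0) -> float:
    """Return the MTTF (hours) that yields reliability r_alloc over t_hours."""
    return -t_hours / math.log(r_alloc)

for label, r in [("low", 0.9990), ("high", 0.9999), ("very high", 0.99999)]:
    print(f"{label:>9}: R = {r:<8} MTTF = {mttf_from_reliability(r):,.0f} hours")
```

Note the order-of-magnitude spacing between the three categories (roughly 10,000, 100,000, and 1,000,000 hours), which is consistent with the allocation philosophy discussed below.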

The calculation results in Table 2 below show that the allocations produce an overall Ground Systems reliability of 99.34% over the critical time period (10 hours), exceeding the overall Ground Systems 99% requirement. The conclusion is that if each subsystem meets or exceeds its allocated availability target, overall Ground Systems will meet or exceed the second launch availability requirement.

Table 2. Ground Systems Overall Reliability at 10 Hours Based on Subsystem Allocations
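The overall figure can be cross-checked directly from the subsystem counts and allocations above (a sketch; not the Relex output):

```python
# Cross-check of the Table 2 result from the Phase I counts:
# 2 "low" (0.9990), 46 "high" (0.9999), and 5 "very high" (0.99999).

overall = (0.9990 ** 2) * (0.9999 ** 46) * (0.99999 ** 5)
print(f"{overall:.6f}")  # ~0.993371, i.e. the 99.34% quoted above
```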

The results above were highly favorable for the following reasons:

1) The order-of-magnitude differences between the “low”, “high”, and “very high” allocations were appropriate, since predicting and calculating availability for complex subsystems is not an exact or precise process.

2) Refining the allocations beyond the order of magnitude measures adds little value to the design engineer.

3) The excess 0.003371 provides management reserve to address unexpected developments that may occur during the Phase II analysis. Within the management reserve, an additional three “low availability” subsystems and three “high availability” subsystems could be added while still meeting the overall Ground Systems 99% launch availability requirement, as the short check below illustrates. This also provided reserve for subsystems that could not meet their allocated requirements.
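Building on the cross-check above, the management reserve claim can be verified the same way (again a sketch using the allocation values, not program data):

```python
# Adding three more "low" and three more "high" subsystems to the baseline
# allocation still clears the 0.99 requirement.

overall = 0.993371
with_additions = overall * (0.9990 ** 3) * (0.9999 ** 3)
print(f"{with_additions:.6f}")  # ~0.990097, still above 0.99
```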

Phase I was completed when allocated launch availability requirements were approved and baselined by GOP decision makers. The initial baseline was revised over time to add and remove subsystems, as required, as the Project and the associated designs matured.

III. Phase II – Subsystem Analysis

When approved probability of launch requirements were formally allocated to the subsystem level, the analysis effort began in order to verify compliance with the requirements. Requirements verification language specified the use of quantitative analysis techniques to assess and validate compliance with the overarching probability of launch requirements. In constructing the analysis methodology, the GOP RMA team envisioned the following key outputs of the analysis and the associated products:

1) A quantitative estimate of subsystem reliability (or availability, for systems that could be repaired within the critical time period) over the critical time period, with a 95 percent confidence interval.

2) Clear documentation of the analysis assumptions. For example, if the subsystem analysis assumed that a launch countdown would continue if one of two redundant paths failed, the assumption would need to be further validated and potentially evaluated within the Launch Commit Criteria process.

3) An assessment of shortcomings or potential improvements in subsystem predicted performance early in the design process, when adjustments are easier to make and less costly.

4) An initial look into potential logistics support priorities, understanding that a more detailed maintainability analysis would follow in the Phase III analysis.

These key outputs were envisioned to support informed decision making as new design subsystems were developed. Additionally, several legacy subsystems were assigned launch probability requirements, as they would be required to support Constellation launch operations. Therefore, Phase II launch probability analysis would inform decisions regarding design alterations to both new and legacy subsystems. In addition to design changes, alternative methods to improve launch probability would be considered, such as adjustments to operational or procedural concepts, or adjustments to the launch availability requirement within available trade space, while still meeting the overall Ground Systems availability requirement.

The GOP RMA team evaluated a number of tools and techniques to meet the analysis requirements. Discrete Event Simulation (DES), Probabilistic Risk Assessment (PRA), and classic reliability and maintainability techniques were all considered. In order to produce the key outputs described above, the clear choice in developing the RMA team's approach was to apply classic reliability and maintainability techniques.

Recognizing that KSC's ground systems are highly complex and that most have some built-in redundancy or stand-by features, the team concluded that the more simplistic parts count methodologies would not produce accurate reliability estimates. Parts count methodologies essentially assume that all parts exist in series and that any failure will cause system failure. Therefore, a more rigorous methodology was required that accurately modeled subsystem functionality and redundancy. The right tool for the job was the Reliability Block Diagram (RBD).

Coincidentally, KSC's Integrated Design and Assurance System (IDAS) project provided an excellent source of information, support, and actual tool suites to address a wide variety of reliability and assurance activities. The IDAS web site explains that, “IDAS shares and supports tools that perform technical analysis for the design, system, safety, mission assurance and sustaining engineering functions over the life cycle of a system. In addition, IDAS collects and shares information that helps the engineer or analyst to learn and apply the tools and techniques.” [2] IDAS also provided access to the Relex software suite (a name derived from Reliability excellence), which delivered a broad spectrum of design, development and life-cycle RMA analysis tools. Figure XX shows the Relex software modules and the support services. Relex software was readily available to KSC users through the Center network, along with user support, training, and technical resources through the Center's support contract with the Relex vendor.

A. Analysis Tool Background

The Constellation GOP RMA team primarily uses the Relex suite in support of the probability of launch availability and maintainability analyses. In this effort, the most commonly used Relex modules are the Reliability Prediction and Reliability Block Diagram modules.[6] The GOP RMA team also uses the Weibull module to develop failure rates. In order to understand the analysis process and the underlying methodology, a brief primer will be useful to set the stage for the subsequent discussion.

Reliability Block Diagram (RBD) techniques form the foundation of the GOP launch availability and maintainability analysis. An RBD is a symbolic logic model that depicts system functionality and operates in the success domain.

Each RBD has a specific start and a specific end point. Blocks contained within the RBD represent components of the system. Each block may represent an individual piece, such as a resistor or screw. Blocks may also represent components at a higher level, such as an entire automobile engine or a complete pump, if sufficient reliability (and repair) data is available. Each RBD block captures the failure and repair parameters of each element within the system.

RBD blocks are connected functionally to replicate the system's operational characteristics. Blocks are connected in series if each element is required for the system to successfully operate. Parallel branches are used when only a subset of the depicted branches is required. This would be used when only one of two (or two of three, etc.) parallel branches are required to successfully operate the system.

Table XX depicts several representations of simple RBD configurations and the associated reliability calculation formulae.[3]

Type Branch | System Reliability Calculation#

Series | RS = RA × RB

Parallel | RS = 1 − (1 − RA)(1 − RB)

Series of parallel branches | RS = [1 − (1 − RA)(1 − RB)] × [1 − (1 − RC)(1 − RD)]

Parallel of series branches | RS = 1 − (1 − RA × RB)(1 − RC × RD)

#Assumes that all components function independently of each other. (Block diagram representations are not reproduced here.)

Table XX. Simple RBD Construction
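For readers unfamiliar with these combinations, the following sketch implements the Table XX formulas for independent components (plain Python, not Relex):

```python
from functools import reduce

def series(*rs: float) -> float:
    """All blocks required: Rs = R1 * R2 * ... * Rn."""
    return reduce(lambda acc, r: acc * r, rs, 1.0)

def parallel(*rs: float) -> float:
    """Any one block sufficient: Rs = 1 - (1-R1)(1-R2)...(1-Rn)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

ra = rb = rc = rd = 0.99
print(series(ra, rb))                              # 0.9801
print(parallel(ra, rb))                            # 0.9999
print(series(parallel(ra, rb), parallel(rc, rd)))  # series of parallel pairs
print(parallel(series(ra, rb), series(rc, rd)))    # parallel of series pairs
```

Nesting these two functions is exactly how more complex RBD structures are evaluated, which is why redundant branches can lift a subsystem well above what a pure series (parts count) model would predict.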

The second key Relex module used in the GOP launch availability effort was the Reliability Prediction module. This portion of the software shares data with many other Relex modules, including the RBD module. The Reliability Prediction module was used to capture and store failure and repair data for parts, components, and assemblies used in an associated Reliability Block Diagram (RBD).

The Relex Reliability Prediction tool can develop parts listings from user input data or from parts libraries such as MIL-HDBK-217, Telcordia, and CNET for electronic parts; the Reliability Analysis Center's NPRD-95 handbook for non-electronic parts; or NSWC-98, “Handbook of Reliability Prediction Procedures for Mechanical Equipment.” These capabilities allow the user to efficiently develop a complete parts library for the specific system based on a variety of different sources and techniques. The Prediction library also supports multiple failure and repair distributions.

Since the Reliability Prediction module shares data linkage with the RBD module (and others), components in the parts library can be quickly and consistently pulled into the RBD as it is developed. This feature improves the ease of RBD construction and the accuracy of the RBD data since a single part in the library may be used multiple times in the system being modeled. The ability to drag a part from the library and drop it into an RBD on the same computer screen was a much appreciated feature.
B. Analysis Methodology

The GOP RMA team encountered a significant amount of healthy skepticism early in the project. Throughout the initial allocation process a number of concerns were voiced by the various stakeholders. The most frequent concerns were:

1) “Meeting these requirements will drive cost through the roof.”

2) “The design teams are already overtaxed. This RMA work will create additional burden on the design teams and detract from real work within the design effort.”

3) “There's no way we will ever meet this requirement for 99.99% reliability at the subsystem level.”

4) “We think you did the math wrong on the allocation process.”

Through several weeks of discussion, stakeholders developed a better understanding of the analysis objectives and the RMA team developed a better appreciation for their concerns. Accordingly, a methodology was developed that was focused on achieving the following objectives:

1) Introduce the RMA team as an embedded member of each design team and as a resource to the design team.

2) Minimize the time impact on the design team by developing an understanding of the design within the RMA team from available resources, using the design team only for clarification or confirmation that the model and underlying assumptions were correct.

3) Link the RMA analysis to the design review milestones, wherever possible and include the Launch Availability analysis report as a reviewable document within the design package.

4) Provide feedback to the design team, such as reliability improvement recommendations, throughout the design process and deliver no “surprises” to the design team in the final analysis. This included supporting the design effort by evaluating alternative solutions from a system reliability perspective.

In execution, these objectives were largely achieved by following a similar process through each subsystem analysis. First, an analysis schedule was developed based on the subsystem design review schedule. Launch availability analyses supported the 60%, 90% and 100% design reviews for each subsystem with an allocated probability of launch requirement.[7] Each analysis was documented in a peer-reviewed report. The analysis followed this general process:

1) The design package was made available to the RMA team electronically through NASA DDMS/Windchill.

2) The RMA team reviewed the design package to become oriented with the subsystem functionality, operations concepts and the specific design. The following documents and data sources within the design review package were assessed within the launch availability analysis:

Operational Concept Documents
System Assurance Analysis (SAA) – which included fault trees and hazard analysis
Parts information and listings
Logistics Support Analysis (LSA)
Subsystem training plans
Lessons learned reports
Procurement specifications
Subsystem Requirements Documents
Interface diagrams and tables
Launch Commit Criteria documentation

3) Based on the integrated understanding of subsystem functionality, operating profile, and risks developed from the design package, the RMA team would decompose the subsystem to an appropriate level, develop functional flow diagrams, and produce initial parts listings specific to the design. The flow diagrams reflected the operational usage, system layout, connectivity, and redundancy schemes, and also formed the basis for subsequent RBD development. Frequently, several functional flow diagrams would be required to capture the necessary scope of the subsystem.

4) With an initial understanding of the subsystem operation, the RMA team would meet with the design team to confirm that there was a correct understanding of subsystem operations, confirm and revise functional flow diagrams, resolve questions, review the parts listing as required and determine if any subsequent design changes were in work for the design release. These initial meetings normally lasted an hour or two. The knowledge of the design team was instrumental in accurately capturing how the subsystem operates, which components need to be included in the reliability analysis, the associated failure data, and how to best map the subsystem configuration in the Reliability Block Diagram (RBD).

5) Building on the knowledge developed and a common understanding (with the design team) of the subsystem operation, layout, components and assumptions, the RMA team refined the parts list and the associated failure and repair data for each modeled component or assembly. This information was catalogued in the associated Relex Prediction Module for the subsystem. Failure and repair data was compiled using the following information sources to determine the most accurate and most applicable data:

Manufacturer's data for the specific part
Failure data developed from like-comparison failure histories
Relex parts libraries
Other reference materials
Test data

Reliability prediction techniques

6) RBDs modeling the subsystem were then developed using the information from the functional flow block diagrams and the reliability and repair data contained for each part in the associated parts library in the Prediction module. All components analyzed within the RBD were considered to be operating at optimum level and conditions until a failure occurred. The configuration of the components within the RBD determined whether system success depended on one or more component failures. The blocks of the RBD may represent individual components or component substructures, which in turn may be represented by other RBDs. The complexity of the RBDs is dependent upon various factors such as mission profiles, function criticality, and redundancy characteristics.

7) Initial estimates were developed using the RBD module Monte Carlo simulator for the 10 hour period of concern and a 95% confidence interval. Normally, one million Monte Carlo simulations were executed. The results were examined and peer reviewed by the RMA team to verify that all connections were correctly made, the correct parts were in the correct locations, and that the RBD functioned as depicted in the functional flow diagram (a simplified sketch of this kind of simulation appears after this list).

8) Initial observations were developed and shared with the design team during a second feedback session. RMA team observations shared with the design team frequently included:

Reliability improvement recommendations
Drawing corrections
High failure rate nodes within the design
GIDEP alerts on parts specified for use
Obsolete parts specified for use

9) The analysis report was then developed in support of the design review schedule. A documentation scheme was developed that captured the RAM Requirements Compliance Verification Process as specified in CxP 70087 (Constellation Program Reliability, Availability, and Maintainability Plan) and NASA's six step process.

10) After peer review and further coordination with the design team, the report was loaded into the design review package as a reviewable and commentable document.
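As a simplified illustration of the simulation in step 7 (a toy model, not the Relex RBD module), consider a single branch of two redundant strings evaluated over the 10 hour window:

```python
import random

random.seed(1)  # fixed seed for repeatable results, as noted in Section III.C

def time_to_failure(mttf_hours: float) -> float:
    """Sample an exponential time to failure for a constant failure rate."""
    return random.expovariate(1.0 / mttf_hours)

T = 10.0        # critical time period, hours
N = 1_000_000   # simulation count typically used in the analyses

failures = sum(
    1 for _ in range(N)
    # The branch fails only if BOTH redundant strings fail within the window.
    if time_to_failure(5_000.0) < T and time_to_failure(5_000.0) < T
)

print(f"Estimated reliability over {T} hours: {1 - failures / N:.6f}")
```

The 5,000 hour MTTF is a notional value; in the actual analyses each block carried the failure and repair data compiled in step 5.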

C. Launch Availability Analysis Observations

The first subsystem that was analyzed failed to meet its allocated requirement of .9999 for the 10 hour period; the subsystem delivered .9995 reliability. The RMA team noted several single point failures and developed a table of suggested design revisions and the associated reliability improvement of each change. The design team went through several iterations of updating the design with support from the RMA team and produced a final 60% design that was estimated at .999999 (six nines) reliability, an improvement of more than two orders of magnitude in unreliability over the original design. This improvement added no additional cost to the original design. The result was due to challenging the assumptions of the design from a quantitative reliability perspective and working together to optimize the design. In fact, 10 million Monte Carlo simulations were required (rather than the normal one million) to achieve a failure within the critical time period.

As with most Monte Carlo simulations, the random number seed was fixed to ensure repeatable and consistent results. This data was recorded in the launch availability analysis report.


IV. The Maintainability Requirement

As the launch availability methodology was refined, the GOP RMA team developed a second methodology to assess subsystem maintainability and compliance with the requirement that, in the event of a failure, ground systems must be able to repair 30 percent of failures and support readiness for launch within an acceptable time period. This requirement was flowed directly to each ground subsystem with an allocated launch availability requirement.

The methodology to assess maintainability leveraged the subsystem RBD already developed under the launch availability analysis. If the RBD could be leveraged to show the relative likelihood of the various failure paths, then repair scenarios could be evaluated to correct the fault within the required time. Fault Tree analysis uses a similar technique called cut set analysis. A cut set is a unique combination of component failures that can cause an overall system failure. A cut set is said to be a minimal cut set if, when any basic event (failure of a component) is removed from the set, the remaining component failures (events) collectively are no longer a cut set. Minimal cut sets can be used to understand the structural vulnerability of a system. The longer a minimal cut set is, the less vulnerable the system is to that combination of events. Numerous cut sets indicate higher vulnerability. Cut sets can also be used to discover single points of failure: a single independent component whose failure can cause the whole system to fail.

The Relex RBD module delivers the ability to produce cut set analysis from an RBD. The output provides quantitative values of unavailability calculated from the combined unavailability of elements within the cut set. This is output from an RBD (success space) reflecting results in failure space. For example, if a single element or component were to cause the system to fail, the associated unreliability would be calculated as:

Q = 1 − R = 1 − e^(−λt)

Therefore, cut sets derived from an RBD can be used to determine each failure path that can cause the system to fail and the combined unreliability of those components within each cut set. Since this is a calculated value based on the failure data for each component (retained in the RBD and the parts library in the Reliability Prediction module), the relative unreliability of each failure path can be calculated and the composite cut set listing can be rank ordered from most likely to least likely to occur. Additionally, since the unreliability associated with each cut set is a calculated value, cut sets can be readily compared within the subsystem; and since each subsystem could, by itself, create a hold or scrub if it failed, cut sets can also be compared and ranked across ground subsystems.
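A minimal sketch of this ranking step, using the constant failure rate unreliability Q = 1 − e^(−λt) and notional cut set data (the component names and failure rates below are hypothetical):

```python
import math

T = 10.0  # critical time period, hours

def q(failure_rate_per_hour: float) -> float:
    """Component unreliability over T: Q = 1 - exp(-lambda * T)."""
    return 1.0 - math.exp(-failure_rate_per_hour * T)

# Hypothetical minimal cut sets: name -> member failure rates (per hour).
cut_sets = {
    "pump A (single point of failure)": [1e-5],
    "valve 1 AND valve 2":              [2e-5, 2e-5],
    "sensor AND controller":            [5e-5, 1e-6],
}

# Cut set unreliability = product of member unreliabilities (independence),
# then rank from most likely to least likely.
ranked = sorted(
    ((name, math.prod(q(lam) for lam in lams)) for name, lams in cut_sets.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, q_cs in ranked:
    print(f"{name:32s} Q = {q_cs:.3e}")
```

As expected, the single point of failure dominates the ranking, which is precisely the kind of insight used to focus maintainability, logistics support, and operations planning attention.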

A. Cut Sets - Easier Said Than Done

The complexity of KSC's ground systems requires developing very sophisticated RBDs. Some complex subsystems were modeled with over 3,000 blocks. In order to organize such systems, Relex RBD allows the user to develop “linked diagrams” within an RBD. This allows a top-level outline RBD to decompose into one or many linked diagrams where lower levels of detail are developed and displayed. This technique does not create problems with Relex RBD reliability or availability calculations. However, it does create problems with developing integrated cut set results within complex systems that use linked diagrams.

The GOP RMA team observed that the Relex software would not calculate cut set results for linked diagrams. The RMA team brought this issue to Relex to resolve. In the meantime, a more labor intensive work-around was successfully developed to complete the maintainability analysis process.

B. Cut Set Analysis Results

For each analyzed subsystem, the cut set analysis tabulated the total number of cut sets and the subset of cut sets that accounted for 90 percent of the cumulative unreliability. This ranked subset focused maintainability, logistics support, and operations planning attention on the failure paths most likely to threaten a launch attempt.

C. Wrapping Up

The RBD and cut set analyses combined to satisfy the verification needs of both the launch availability and the maintainability requirements, while also improving subsystem designs along the way. Additionally, as the RMA team gained credibility with the design teams and other stakeholders, the team and its models were called upon to support other analysis efforts.

V. Conclusion

The Constellation Ground Operations Project demonstrated that quantitative RMA techniques, applied early and consistently at the subsystem level, can be used to allocate and verify demanding launch availability requirements. The combination of requirements allocation (Phase I), RBD-based verification analysis (Phase II), and cut set based maintainability analysis (Phase III) not only verified compliance with the 99 percent probability of launch requirement, but also drove measurable reliability improvements into ground system designs, in at least one case at no additional cost.

Appendix A

Acronym List

The following acronyms are used in this paper.

DES	Discrete Event Simulation
ERD	Element Requirements Document
GIDEP	Government-Industry Data Exchange Program
GOP	Ground Operations Project
GS-SRD	Ground Systems - Systems Requirements Document
IDAS	Integrated Design and Assurance System
KSC	Kennedy Space Center
LSA	Logistics Support Analysis
MTBF	Mean Time Between Failure
MTTF	Mean Time To Failure
PRA	Probabilistic Risk Assessment
RBD	Reliability Block Diagram
RMA	Reliability, Maintainability and Availability
SAA	System Assurance Analysis
VAB	Vehicle Assembly Building

Acknowledgments

The authors thank Tim Adams for his support with IDAS resources, reliability analysis tools, and techniques.




[1] KLXS Operations Lead, Mail Stop: SAIC-LX-4

[2] KLXS RMA Analyst, Mail Stop: SAIC-LX-4

[3] PhD., Senior RMA Analyst, Mail Stop: SAIC-LX-4

[4] Technical Manager for Operations and Integration, Mail Stop: LX-I

[5] Although the critical time period duration went through two iterations and some changes were made to the subsystems included in the analysis, for consistency the final critical time period value of 10 hours and the final configuration of subsystems are used throughout this paper.

[6] With the release of Relex 2009, the Reliability Block Diagram functionality was folded into the OpSim module. For ease of understanding we use the term RBD module.

[7] Not all subsystems followed the 30%, 60%, 90%, 100% design review process. A few subsystems deviated with other design review milestones, such as 45% and 90%.

References

[1] GRANT Shuttle data 86%

[2] Adams, T., “IDAS Resources” URL:

[3] NASA System Engineering Tool Box, section 3.5.1, Table 3-4, page 3-30
