Software Metrics for the Design Phase of Real-Time Systems



Real-time systems are vital systems; indeed, every system can be viewed as a special case of a real-time system. Design metrics for real-time systems are a challenging, central component of engineering high-quality real-time software. The design phase is among the most influential phases of the lifecycle: problems diagnosed at this stage can be resolved quickly and cheaply, since the real-time software has not yet been coded. This thesis proposes metrics for the design phase of real-time systems: the Flow Control Metric, Trend of WCET Metric, Basic Cycle Time Metric, RTA Structure Metric, Message Activation Rate Metric, Task Activation Metric and Upper Bound Metric. These metrics have been analyzed for changes in their values and the corresponding trends presented. The thesis pays particular attention to the non-functional requirements of RTS, specifically fault tolerance, performability, timeliness, quality, performance, efficiency and predictability.


This thesis is the final work of my Master's study at the Department of Computer Science and Engineering, University of Engineering and Technology, Lahore. It documents the work carried out during that study.

This piece of writing consists of eight chapters: Introduction, Literature Survey, Problem Statement, Proposed Solution, Results and Discussions, Conclusion, Future Recommendations and References. The title of each chapter is self-explanatory and gives insight into its contents.

The motivation to write in this active area of research stems from the fact that the design phase of any project is its most vital and far-reaching phase. If a problem can be diagnosed, removed and re-evaluated at the design phase itself, the cost of producing the project falls drastically. To aid in this problem prognosis, we have employed the knowledge of metrics. Moreover, every system is a special case of a real-time system, so research done in the field of Real-Time Systems indirectly caters to a wide audience.

As Robert Frost rightly said, 'The artist in me cries out for design.'

We have also built up our research from the knowledge of other people who have already worked in this area for a long time. As Socrates advised, 'Employ your time in improving yourself by other men's writings so that you shall come easily by what others have labored hard for.'


Firstly, my heartiest gratitude to Allah Almighty Who has blessed us with the power of knowledge and wisdom to accomplish this work.

Secondly, it is an honour to be able to express my gratitude whole-heartedly to Ma'am Shazia Shoaib, Assistant Professor at UET Lahore's Dept. of Computer Science and Engineering. All those late hours of mind-grinding writing of the chapters and the mugs of hot coffee gulped to keep our eyes wide open: my thesis wouldn't have materialized without all the moral support you lent me. Thank you.



1.1 Overview

Real-Time Systems are widely employed in today's world. We encounter them, often without realizing it, and like all other computer applications they make our lives easier. What makes these systems differ from others is the time constraint: RTS (Real-Time Systems) are time-bound. This doesn't imply that these systems are 'always' expected to be super-fast; they simply have to meet the timing constraints specified in the design phase. More insight into this interesting behavior is presented in the next chapter.

1.2 Motivation

What attracted us to this active area of research is that every system, if studied closely, is a loosely-constrained version of a real-time system, so research done in this area can be helpful to all systems under software engineering. Moreover, the design phase was chosen as the subject of research: we saw a lack of substantial work in this area and a need for improvement. This thesis therefore attempts to devise meaningful metrics pertaining to the design phase of Real-Time Systems. The focus will mainly be on the non-functional requirements of Real-Time Systems.

1.3 Contribution

We have attempted to devise meaningful metrics by carrying out extensive research on the available literature pertaining to the design phase of Real-Time Systems. These metrics are then analyzed with different values, and the changing trends in the metrics are observed.

1.4 Organization of Thesis

The thesis is organized into eight chapters. Chapter 1 is this Introduction.

  1. Chapter 2 pertains to Literature Survey.
  2. Chapter 3 describes the Problem Statement of our study.
  3. Chapter 4 lays out the Proposed Solution.
  4. Chapter 5 gives insight about the Results and Discussions.
  5. Chapter 6 gives the Conclusion.
  6. Chapter 7 gives the Future Recommendations.
  7. Chapter 8 enlists the References.



A real-time application (RTA) is an application program responsible for carrying out its functionality within a time-slot that the user perceives as instantaneous or prompt. The latency or delay cannot be more than a few seconds; a longer delay results in a failed system, and in some scenarios this failure can be life-threatening. Whether a system qualifies as a real-time system depends largely on the value set for the worst-case execution time (WCET) and on whether that value is being met. The WCET is the maximum time for which a task may carry out its functionality before handing over to the next; it applies to both the hardware and software platforms in view [1]. The use of RTAs is termed real-time computing (RTC) [2]. These systems comprise computations whose success depends on the correctness of results as well as the timely delivery of responses. Fulfilling both of these clauses is an added responsibility.

A metric is a measure of some property of an entity. You cannot control anything unless you measure it. The objective of this thesis is to discuss design-phase metrics for real-time applications.

A Real-Time System (RTS) is a system whose aim is a timely, correct answer. Correctness here is twofold: logical correctness of the computations and timeliness of the output. Lateness beyond a pre-defined time-value is intolerable under any circumstances.

To make matters clear, let us take an example of a Real-Time System: an assembly line at an automobile factory. Robots have taken over the role of humans in this particular field, which calls for the employment of Real-Time Systems. Each part has to be attached to a moving chassis. If the assembly line moves slower than a pre-determined value (for any reason), the parts won't get screwed on at the appropriate places; the line moving too fast produces similar mismatches. Stopping the assembly line would be a costly operation. Therefore, the range of motion of the chassis, together with the speed of the assembly line, defines a window of opportunity in which to screw on/attach the parts to the moving chassis.

2.1 Specific Features of Real-Time Systems

The major characteristics of real-time systems include the following [6] :

Timeliness is important. The system functions within the pre-specified time-constraints.

It ought to be reactive. Rapid responses to inputs from the external environment that drive the system are demanded.

The concurrent execution of threads of control is vital, as different parts of the software run in parallel.

It usually has very high requirements on most non-functional attributes, such as reliability, fault tolerance and performance.

It ought to be non deterministic.

It also ought to be deadline-driven.

2.2 Period VS Deadline

The beginning of this chapter laid down the importance of timeliness for Real-Time Systems. These time-constraints come in two forms, periods and deadlines, as stated in [7].

Suppose parts to be attached to the chassis pass by on the assembly line at a rate of one per second. This means that a new chassis shows up every 1000 ms; the period of the task is thereby 1000 ms. Note that whether the chassis pass once per second or a thousand times per second, the system remains a Real-Time System either way. Real-time does not necessarily denote rapidity; it simply indicates that a system has timing constraints that must be met (with correct responses) to avoid failure.

The deadline is a cut-off value, a constraint beyond which an operation cannot be allowed to carry on. For instance, if the window of opportunity is 150 ms, the deadline is also 150 ms after the commencement of the operation. In our example, the commencing time is defined as the moment the chassis enters the range of the screw-on machine. The chassis example also has physical constraints, such as the speed of the belt that carries the assembly line and the screw-on machine's screw-on motion and timing; these factors also influence the period and deadline of the task.

Another example is that of communication systems, which also have real-time constraints. Communication systems are like day-to-day conversations: a delayed answer causes frustration. Now suppose a multimedia application needs to compress an audio stream at a rate of 60 frames per second. Before a new frame can be processed, the old one must finish processing; hence the arrival of the next frame marks the deadline. [7]
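As a quick sanity check of the arithmetic above, the per-frame deadline follows directly from the frame rate. The helper below is purely illustrative; only the 60 fps figure comes from the example.

```python
def frame_deadline_ms(frames_per_second: float) -> float:
    """Each frame must finish before the next arrives, so the
    deadline equals the frame period."""
    return 1000.0 / frames_per_second

# At 60 frames per second a frame has roughly 16.67 ms to finish.
print(round(frame_deadline_ms(60), 2))
```

At 60 fps the deadline is thus about 16.67 ms per frame, far tighter than the one-second intuition of a conversation.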

2.3 Aperiodic Tasks

Aperiodic tasks (or aperiodic servers) respond to randomly arriving events, in contrast to periodic tasks, which occur at regular intervals. Consider the automatic braking mechanism installed in cars nowadays: it must start braking the car as soon as the brake is applied by the driver. Even a moment's delay is unacceptable, as it could lead to an accident. To put this kind of real-time system in place, tiny microcontrollers are employed, as they avoid overhead and delays. This is a perfect example of an aperiodic task. The difference between aperiodic and periodic tasks is analogous to the difference between interrupt-driven and polling tasks; just as interrupt-driven tasks can be converted to polling tasks, aperiodic tasks can be converted to periodic ones. In this case the system has to check repeatedly for the occurrence of the event, and the software is responsible for determining how frequently this iterative checking needs to be carried out. When the queued event is encountered, the respective calculation is carried out to complete its operation.
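The aperiodic-to-periodic conversion described above can be sketched as a polling loop. The function and the simulated hardware flag below are illustrative assumptions, not part of any real braking system:

```python
def poll_for_event(event_occurred, poll_period_ms, max_polls):
    """Periodically check for an aperiodic event (polling conversion).

    event_occurred: zero-argument callable standing in for reading a
    hardware flag; poll_period_ms is the chosen polling period.
    Returns the elapsed time at which the event was detected,
    or None if it never arrived."""
    elapsed = 0
    for _ in range(max_polls):
        if event_occurred():
            return elapsed          # event detected in this poll cycle
        elapsed += poll_period_ms   # a real system would sleep here
    return None

# Simulate an event that becomes pending on the third check.
flags = iter([False, False, True])
print(poll_for_event(lambda: next(flags), poll_period_ms=10, max_polls=100))
```

Note that the worst-case detection latency equals one polling period, which is exactly the design trade-off the text mentions: the software must choose how frequently to poll.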

2.4 Hard or Soft?

Every real-time system falls under the wide domain of either 'soft' or 'hard'. When we say a system is hard, we mean the timing constraints must be strictly met; these are usually systems where the slightest miss of the prescribed deadline cannot be accommodated, as it could result in loss of life. At the other extreme are soft real-time systems, which can tolerate considerable delay, though once again the delay shouldn't last 'forever'. In between lie firm real-time systems. The distinction between these classes is somewhat fuzzy, so these systems actually span a spectrum, and each system fits somewhere along it. This is shown diagrammatically in Figure 1.

The Real-Time Spectrum

A real-time system qualifies as hard when it meets the timing requirements set for it and also provides correct responses; otherwise it is said to have failed. This failure can take the form of lost finances, loss of equipment, the death of users or operators, and so on. Such losses are intolerable, so it is well understood that handling hard real-time systems with the utmost care is vital. One example of such a tightly time-constrained system is an automatic flight controller or navigator: the response to the inputs (usually coming from the external environment) has to be correct and within the WCET. Only then will the lives of the people on board be safe and the airplane or helicopter reach its destination safely. Irresponsibility in handling this kind of system is certainly unacceptable.

Contrary to this, a soft real-time system is one in which the timing requirements are not as constrained as those of a hard real-time system. Missing a timing constraint from time to time has only a negligible effect on the overall application's performance. What needs to be understood here is that once a deadline is missed, the previously stored value can be reused for further processing. The results won't be as accurate, but the system keeps functioning, albeit with slightly degraded performance. The point to note is that if many deadlines are missed, the error tolerance will be exceeded and problems can arise. [6]

2.5 Predictable VS Deterministic

Two more terms used avidly in describing real-time systems are predictability and determinism. To say that these terms are not linked at all would be a lie, and the slight similarity between them causes confusion. We will attempt to resolve this ambiguity with the explanation that follows.

Three design parameters, viz. period, deadline and WCET, need to be known beforehand (at the design phase) for the system to be a predictable real-time system. Predictability refers to the fact that everything should be known and guaranteed about the system beforehand. To achieve this, the system is broken down to the atomic level and its states are analyzed. The suitable choice of a scheduling algorithm and detailed analysis then ensure the predictability of the system.
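As one concrete illustration of such schedulability analysis, assuming rate-monotonic scheduling (an assumption for this sketch; the text does not fix a particular algorithm), the classic Liu-Layland utilization test uses exactly the period and WCET parameters named above:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) Liu-Layland utilization test for
    rate-monotonic scheduling. tasks: list of (wcet, period) pairs
    expressed in the same time unit."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)   # e.g. for 3 tasks, about 0.780
    return utilization <= bound

# Three hypothetical tasks as (WCET ms, period ms) pairs.
tasks = [(20, 100), (40, 150), (100, 350)]
print(rm_schedulable(tasks))
```

If the test passes at design time, every deadline is guaranteed; if it fails, a more exact analysis (or a redesign) is needed, which is precisely why these parameters must be known before implementation.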

Determinism is a special case of predictability: the timing requirements are not only known beforehand but can also be pre-determined. This means a system can be designed in such a way that the time constraints can be determined beforehand for every task that will occur in it. The good news is that for real-time systems determinism is preferred but is not a vital component; since determinism is difficult to achieve, this concession is more than welcomed by the designers of real-time systems.

2.6 Time-triggered VS Event-triggered

Event-triggered designs give a faster response at low load but more overhead and a greater chance of failure at high load. This approach is more suitable for dynamic environments, where activities can arrive at any time. Time-triggered designs, by contrast, initiate all activities at pre-determined points in time, which sacrifices responsiveness at low load but keeps behavior predictable even at peak load.

2.7 Design Lifecycle of a General System

The system development lifecycle (SDLC) is the methodology adopted when developing any system in the computing domain. The collection of information marks the beginning of the SDLC; the end is marked by the final delivery of the product to the end user.

The literature describes an inverse-waterfall ("V"-shaped) model for the design of a typical system. This is explained comprehensively in a study guide for Information Systems [10] and also in [8] and [9].

The V-Design Model of a General System [10]

This process is known as the "V" model of systems development.[10] At each testing stage (see diagram, above), the corresponding planning stage is referred to, ensuring the system accurately meets the goals specified in the analysis and design stages.

Viewed analytically, any system to be designed goes through the following phases [12]:

The Birth of the Concept

This is the first phase of design and as the name suggests, it marks the beginning of identifying the needs of the end-user. Its major steps include the following:

Need Identification

Feasibility Analysis

System Requirements Analysis

System Specification

Conceptual Design Review

We will outline each of these sub-phases of conception. Need Identification lists the needs of the user; these are usually given in plain English and noted down by the person in charge of collecting information about the user's demands. Once these needs are identified, the software engineer performs two important analyses: the Feasibility Analysis and the System Requirements Analysis. The System Specification comes next, highlighting the technical requirements of the system to be developed; as the technical requirements are chalked out here, it helps with the design later on. The last part of this phase is the Conceptual Design Review. This is important because no document or design is complete without a review, which enables the outline produced here to be re-analysed from the perspective of an expert.

Prelim System Design

This is the second phase of design and it marks the beginning of identifying the steps involved in carrying out the system design. Its major steps include the following:

Functional Analysis

Requirements Allocation

Detailed Trade-Off Studies

Synthesis of System Options

Preliminary Design of Engineering Models

Development Specification

Preliminary Design Review

In this phase subsystems are identified which will be responsible for the structure of the system. Then the interfaces between these subsystems are identified. The testing requirements and the evaluation criteria are also laid down here. To mark the end of this phase a Development Specification is produced which equips us with enough material to head towards the detailed design and specification phase.

The Detailed Design and Development

This is the third phase of design and it marks the identification of the details of design and development. Its major steps include the following:

Detailed Design

Detailed Synthesis

Development of Engineering and Prototype Models

Revision of Development Specification

Product, Process and Material Specification

Critical Design Review

This phase elaborates the elements already defined, such as the subsystems and the interfaces, into a detailed design. The envisaged environment is created and evaluated, and judged for the maintenance cost that may be incurred. How much and what kind of support is needed is also determined at this phase. If the developer needs to make changes to the requirements (in collaboration with the user), this phase is the right time to do it: as in the waterfall model, the developer retraces to the specification if the need arises.

The Much-Awaited Construction

This is the fourth phase of design and it allows the actual product to be assembled by the programmer/engineer. Key steps within the Production/Construction stage include:

Production/Construction of System Components

Acceptance Testing

System Distribution and Operation

Operational Testing and Evaluation

System Assessment

In this phase the system is developed in the true sense and any changes are marked. System assessments are performed to remove errors and also to make the system adaptable to change.

Utilization and Support

This is the fifth phase of design and it caters to the identification of ways and means to ensure support for the software being developed and also to see how it will be utilized. The system is also checked to see if it will operate feasibly in the environment where it will be finally deployed. The important steps in this phase are as follows:

System Operation in the User Environment

Maintenance and Logistics Support

System Modifications for Improvement

System Assessment

Phase-Out and Disposal

This is the last phase of the development lifecycle. The efficiency of the system is tested after installation in the environment where it will be up and running for a long time. Any errors or complaints that surface are catered to. The design engineer looks out for bugs, for further operations that need to be added, for the match between operational requirements and system performance, and for the availability of alternative systems.

The Waterfall Lifecycle Model, the Spiral Model and all other such models more or less follow this same outline when developing a system. Having seen the general design strategy, we now examine how the design phase differs for Real-Time Systems, since such stringent timing constraints cannot be handled by the general strategy presented here.

2.8 Design Lifecycle of a Real-Time System

Reactive and real-time systems involve concurrency, have strict requirements, must be reliable, and involve software and hardware components [4].

These systems respond to the physical environment at a speed determined by the environment. This class of systems has been introduced to distinguish them from transformational systems (input, process, output). Reactive systems include, among others, telephones, communication networks, computer operating systems and man-machine interfaces.

Real-time Systems (RTSs) have reactive behaviour. An RTS involves control of one or more physical devices with essential timing requirements. The correctness of an RTS depends both on the time in which computations are performed as well as the logical correctness of the results. Severe consequences may occur if the requirements of a real-time system are not met. Requirements from an RTS are diverse, ranging from intricacies of interfaces to providing guarantees of safety and reliability of operation.

Real-Time Systems have the following characteristics [6],

They are high in non-functional requirements, viz reliability, fault-tolerance etc.

They are timely, performing within the specified time.

They ought to be reactive.

They ought to handle the execution of threads concurrently.

They ought to be non deterministic.

They ought to be deadline-driven.

These characteristics lead to a difference in the design model, viz each design phase is validated by simulation or verification before going on to the next phase [14].

The Design Lifecycle of a Real-Time System [14]

J. F. Peters and S. Ramanna [13] pointed out the need for a different design strategy for real-time systems by stating that for real-time systems, the original SDLC (System Design Lifecycle) undergoes some changes. They stated that real-time relevant logic must be incorporated at the design phase itself. Thus at the design level it is vital to take into consideration factors which can make the design better via metrics.

2.9 Contrasting the Difference in Design

Ramanna and Peters [13] have further contrasted the designs and suggested how to remove the flaws from the orthodox design strategy. This is explained diagrammatically below.

The IEEE System Lifecycle Model [13]

The Enhanced IEEE System Lifecycle Model for RTS. [13]

The two diagrams given above are self-explanatory and give a good deal of insight into the strategy adopted.

2.10 Issues in Real-Time System Design

The design of real-time systems is a daunting task. The root cause of concern is that real-time systems have to interact with real-world components, and all such interactions pose synchronization and other problems. Moreover, the interactions are not limited to a single entity: a number of entities have to be dealt with simultaneously. Take the example of a telephone-switching real-time system designed to handle incoming calls. Here the calls have to be handled independently, and since they may arrive in any random order, the pattern cannot be pre-determined.

These issues and scenarios are discussed in detail in this section [15]:

Response Real-Time Behaviour

Ability to Recover from Failures

Mechanism to Work with Distributed Architectures

The Ability to Communicate Asynchronously

Racing Scenarios and Timing Issues

2.10.1 Response Real-Time Behaviour

Real-Time Systems stand apart from other systems because they must respond to all interactions with the environment as soon as possible, within the time-slot allotted to them. A response is useful only if it is correct in value and delivered within that time slot; delay is simply intolerable and causes the system to fail. What needs to be kept in mind is that both the hardware and the software have to be designed around the stated real-time requirements. For example, a telephone switching system must respond to thousands of callers/subscribers within a pre-determined time-slot, usually one second or less. To meet these requirements, the subsystems involved (call termination and software communication) have to work in accordance with each other so that the timing requirements can be met. In addition, all these timing requirements have to be met for calls set up at any time.

These real-time requirements have to be incorporated very early into the design of the system; in fact, the timing constraints are taken into consideration right from the architecture design phase. The hardware and software engineers work in collaboration to achieve these goals and to choose the optimum architecture. The simpler the architecture, the more capable it is of handling the time constraints.

Other considerations are discussed here as well. What kind of processors would be suitable? What speed would help meet the timing requirements? What link speed should be chosen for suitable communication? If the link speeds chosen are not appropriate, queues build up and message transmission can be delayed. Link utilization should not exceed 40-50% of the available bandwidth.

Further questions follow: what kind of communication is preferred? Does it have nodes? What is the CPU utilization? The answer lies in choosing powerful, optimal processing components. Both link and peak CPU utilization should be below 50%.
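The 50% rule of thumb above can be expressed as a simple design-time check. The helper name, the parameter names and the sample figures below are hypothetical, chosen only to illustrate the rule:

```python
def within_engineering_limits(cpu_busy_ms, window_ms,
                              link_bits, link_capacity_bits,
                              limit=0.5):
    """Apply the rule of thumb that peak CPU utilization and link
    utilization should both stay at or below the given limit
    (50% by default)."""
    cpu_util = cpu_busy_ms / window_ms
    link_util = link_bits / link_capacity_bits
    return cpu_util <= limit and link_util <= limit

# Over a 1-second window: CPU busy for 300 ms, link carried
# 40 Mbit of a 100 Mbit capacity -> both utilizations under 50%.
print(within_engineering_limits(300, 1000, 40e6, 100e6))
```

Keeping both utilizations under the limit leaves headroom for load peaks, which is the point of the rule.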

The suitability of the operating system also has a big question mark hanging over it: choosing the right operating system is of the utmost importance. Tasks with critical real-time requirements need to be given high-priority execution at the operating-system level, and pre-emptive scheduling can be used to favour the critical tasks. The handling of interrupt latency as well as scheduling variance needs to be verified at this stage. Scheduling variance refers to the predictability of task scheduling times; interrupt latency refers to the delay between the arrival of an interrupt and the start of its service routine.

2.10.2 Ability to Recover from Failures

Real-time systems must be able to detect and recover from faults and errors on their own.

2.10.3 Internal Failures

Both hardware as well as software can be the home to internal failures. The different types of failures generally encountered are discussed below:

Task Reporting a Software Failure:

Real-Time Applications cannot rely on the traditional error-reporting techniques of other systems: they cannot pop up a dialogue box to display the error, nor wait for the error to be removed by the user. Instead, particularly when a task hits a processor exception, they use what are known as roll-back conditions: the system is advised to roll back to the previous correctly-functioning saved state. The tasks simply have to be designed to be safeguarded against error conditions. In Real-Time Systems this becomes crucial, as a series of events may in turn trigger further sets of events; these new event sequences may form spontaneously, so not all of them can be covered in review and testing.

Restarting of the Processor:

Real-time systems comprise more than one node, all put together under the command of a real-time executive. If one of the nodes malfunctions, for whatever reason, the entire system cannot be shut down on account of that node. The software's design should therefore be capable of handling such single-node failures. Two activities arise from this:

Managing and Recovering the Failure of Processor(s):

Whenever a processor in operation fails, all interactions with the failed processor(s) are ceased immediately. The affected processor is then either repaired or shut off, and meanwhile its job is handed over to one of the other processors in operation. If the faulty processor has no peer processors to take over in such situations, the system makes the processor roll back to its previous state. This can leave the states of other components inconsistent with the processor, but these inconsistencies can be resolved easily by running audits and checks.

When the board fails:

Real-time systems are expected not only to handle failures but also to recover from them completely. These include board failures: when a PCB (printed circuit board) fails, the system is expected to handle the failure itself and to recover by switching to another printed circuit board, which must have been made available at design time.

When the link fails:

Most communication in real-time systems takes place over links which connect the inter-connected nodes of the system. Whenever a link fails, the system redirects the message via a different, alternate route. This mechanism saves vital information from being lost and keeps message communication from being hampered.

2.10.4 External Failures

As discussed earlier, real-time systems have to interact with the real world. Whenever this real world experiences a failure of any sort, our real-time system ought to be able to manage it. Let us look at the different scenarios in which things can go wrong in real-time systems and their surroundings, i.e., the environment.

External Components' Invalid Responses:

Real-time systems should be able to handle all sorts of malfunctions of the external environment. These include hardware problems as well as problems arising from the end user's failure to handle the system carefully.

Inter Connectivity Failure:

Real-time systems rely not only on their internal node structure but also on an outer node structure for the transfer of data to and from external entities of the system. Handling external link failures is analogous to handling internal ones; the only difference lies in the re-routing of messages: in the case of external failures, the lost link may take days to recover, so the delay can be much longer.

2.10.5 The Ability to Communicate Asynchronously

Software design can be facilitated by using Remote Procedure Calls (RPCs). This technique sounds good and brings great ease, but only for traditional systems; for Real-Time Systems it provides hardly any relief. RPCs work on a query-response theme, whereas Real-Time Systems are event-based: their communication is more asynchronous in nature.

Real-Time Systems can instead be designed using state-machine models. The advantage of this model is that many messages can be accommodated in a single state, and the state to which control is transferred next depends on the message received. State-machine models suit Real-Time Systems well, though they come with their own set of complexities.
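A minimal sketch of such a state-machine design is shown below. The states and events are invented for illustration (loosely modelled on call handling) and are not taken from any real switching system:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle",      "off_hook"):  "dialing",
    ("dialing",   "digits_ok"): "ringing",
    ("ringing",   "answered"):  "connected",
    ("connected", "on_hook"):   "idle",
}

def handle(state, event):
    """Return the next state; unknown events leave the state unchanged,
    so stray messages cannot crash the machine."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["off_hook", "digits_ok", "answered", "on_hook"]:
    state = handle(state, event)
print(state)  # back to "idle" after a complete call
```

Because each (state, event) pair maps to exactly one successor, asynchronous messages can arrive in any order without ambiguity, which is the property that makes this model attractive for real-time communication.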

2.9.6 Racing Scenarios and Timing Issues

A look at any real-time systems' protocol points to one factor: timing. Each stage of the protocol has provisions to handle timing separately, and each stage also attempts to account for timing values under increasing load. When these requirements are implemented, timers are used. Timers monitor the progress of events: if the desired event finishes execution, the timer is cancelled; otherwise it expires and a recovery action comes into play.
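A minimal sketch of this watchdog pattern, using Python's standard library (the `Watchdog` class and the rollback action are illustrative assumptions, not part of any cited protocol):

```python
import threading

# Sketch: start a timer when an operation begins; cancel it if the
# operation completes in time, otherwise the timeout handler fires a
# recovery action (e.g. a roll-back).
class Watchdog:
    def __init__(self, timeout_s: float, on_timeout):
        self.timer = threading.Timer(timeout_s, on_timeout)

    def start(self):
        self.timer.start()

    def cancel(self):
        # Call when the watched event completes before the deadline.
        self.timer.cancel()

recovered = []
wd = Watchdog(0.05, lambda: recovered.append("rollback"))
wd.start()
wd.cancel()   # event finished in time; the recovery action never fires
```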

Sometimes the state of a resource is unpredictable; this is when a race condition occurs, with two tasks competing against each other on the basis of time. The condition is usually resolved by defining rules about who gets to keep the resource when a clash occurs. Devising a method to resolve such race conditions is easy; the difficulty lies in identifying them in the first place.

2.9.7 Flow Control

This refers to the synchronization of events to accommodate both the sender and the receiver, managing them so that the receiver can keep up with the sender. Usually, for real-time systems, the controlled object lies outside the domain of the controlling sub-system, so flow control is necessary to synchronize events. Correlated event showers can be buffered at the interface between the controlled object and the computer system.

Several engineering solutions have been devised to control this flow of events at the interface, including low-pass filters and the intermediate buffering of events in hardware and/or software. It is difficult to come up with a universal flow-control scheme for real-time systems that ensures no important event is discarded by the flow-control mechanism (as can happen with correlated event showers or a faulty sensor).

2.9.8 Maximum Execution Time of Programs

The deadline for the delivery of a result can only be guaranteed if an upper bound on execution time is available at the design phase itself. This bound ought to be tight for two reasons: firstly, the result should be an outcome of recent input data; secondly, in a statically scheduled system a loosely chosen upper bound wastes valuable resources unnecessarily.

Real-time systems are usually designed to run for very long periods. It should be possible to determine the maximum execution time of a program, for example by imposing language restrictions. Recent architectural trends such as caches and pipelining make this computation all the more complex.

2.9.9 Scheduling

In general, the problem of deciding whether a set of real-time tasks, whose execution is constrained by some dependency relation (e.g., mutual exclusion), is schedulable belongs to the class of NP-complete problems [5]. Finding a feasible schedule, provided one exists, is another difficult problem. The known analytical solutions to the dynamic scheduling problem [6] assume stringent constraints on the interaction properties of task sets that are difficult to meet in distributed real-time systems.

2.9.10 Testing for Timeliness

In many real-time system projects more than 50% of the resources are spent on testing. It is very difficult to design a constructive test suite to systematically test the temporal behavior of a complex real-time system if no temporal encapsulation is enforced by the system architecture.

2.9.11 Error Detection

In a real-time computer system we have to detect value errors and timing errors before an erroneous output is delivered to the control object. Error detection has to be performed by the receiver and by the sender of information. The provision of an error detection schema that will detect all errors specified in the fault hypothesis with a small latency is another difficult design problem.

This thesis deals with real-time systems, so we now set out to define them. We will break down each term, define it individually, and then head on to a complete definition.

A system is a mapping of a set of inputs into a set of outputs. Every real-world entity can be mapped as a system.

Real-time refers to the correct response provided within a pre-determined time frame.

Hence, we may say that a real-time system is one in which proper functioning depends on both the correctness of the outputs and their timeliness; this follows the definition given by Phillip Laplante [13].

Failure for a real-time system doesn't simply mean that the requirements were not met. It usually means a life-threatening situation, where merely missing a temporal deadline defined by the designer/programmer can lead to loss of lives.

According to Sommerville [16], another way to look at real-time systems is to view them as stimulus/response systems: all the inputs take the role of stimuli, and the outputs of responses. Stimuli may be periodic or aperiodic. Periodic stimuli occur at predictable time intervals; aperiodic stimuli occur irregularly and are usually signalled using the computer's interrupt mechanism.

To make matters clear, let us take the example of an assembly line at an automobile factory. Robots have taken over the role of humans in this field, which calls for the employment of real-time systems. Each part has to be attached to a moving chassis. If the assembly line moves slower than a pre-determined rate (for any reason), the parts will not get screwed on at the appropriate places; the line moving too fast has similar mismatched consequences, and stopping it altogether is a costly operation. Therefore, the motion of the chassis together with the speed of the assembly line defines a window of opportunity within which each part must be attached to the moving chassis.

Real-time systems can be classified into three sub-categories [16]:

Hard Real-Time Systems

For instance, an avionics weapons delivery system in which pressing a button launches an air-to-air missile. Missing the deadline to launch the missile within a specified time after pressing the button can cause the target to be missed, which will result in catastrophe.

Soft Real-Time Systems

E.g., an automated teller machine: missing even many deadlines will not lead to catastrophic failure, only degraded performance.

Firm Real-Time Systems

'A firm real-time system is one in which a few missed deadlines will not lead to total failure, but missing more than a few may lead to complete and catastrophic system failure.'

E.g., an embedded navigation controller for an autonomous robot weed killer: missing a critical navigation deadline causes the robot to veer hopelessly out of control and damage crops.

All applications fall somewhere within these definitions; rather, the demarcation amongst the types is a bit fuzzy, as illustrated in Figure 6.

The Real-Time Spectrum Shift (Soft, Hard and Firm)

In the USA, 'real-time' commonly refers to on-line terminal services such as ATMs, database enquiry, and on-line reservation and payment systems. A real-time system comprises parallel or concurrent activities.

Embedded real-time systems are the type of RTS built on specialized hardware and lacking an operating system. An embedded RTS is called organic if it is not tied to specialized hardware, and loosely coupled/semi-detached if it can be made organic by re-writing a few modules.

Real-time Systems are usually event-based. An event in real-time software is anything that causes a change in flow. Real-time systems are always susceptible to change in their normal course of action. This is the reason why the notion of events is used to explain their functioning. Events can be of two types, viz:

  1. Synchronous: events which occur at predictable intervals.
  2. Asynchronous: events which occur at unpredictable times (caused by external sources).

2.10 Other Design Methodologies for RTS

Sommerville [16] proposes a design strategy for real-time systems which is event based. The stages are as follows:

  1. Identify the stimuli to be processed and what responses are desired for each.
  2. Then determine the timing constraints for each stimulus and response.
  3. Cluster the stimuli and responses together based on behaviour. With each such class associate a process.
  4. Identify or design any algorithms to carry out this set of concurrent processes.
  5. Design a scheduling system which will ensure that processes are started in time to meet their deadlines.
  6. Employ a real-time executive to integrate the system.
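The schedulability concern in step 5 can be sanity-checked at design time with the classic rate-monotonic utilization bound of Liu and Layland. This sketch is not part of Sommerville's text; it is an illustrative sufficient (not necessary) test, with task parameters chosen arbitrarily:

```python
# Sketch: rate-monotonic schedulability check (Liu & Layland bound).
# tasks is a list of (worst_case_execution_time, period) pairs, in the
# same time unit.
def rm_schedulable(tasks) -> bool:
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # sufficient condition only
    return utilization <= bound

# Three hypothetical periodic processes with (C, T) in milliseconds:
# utilization = 0.26, comfortably under the n=3 bound of about 0.78.
print(rm_schedulable([(1, 10), (2, 20), (3, 50)]))  # → True
```

A task set that fails this test may still be schedulable (the bound is conservative); exact response-time analysis would then be needed.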

Sommerville further also indicates the general components of a real-time executive. He says that these components can be used fully or partially depending on the size of the real-time system.

The real-time executive has also to be set to accommodate two types of priority levels for real-time systems:

  1. The interrupt level: for processes that need immediate processing attention. The processes handled by interrupts are usually those associated with foreground processes.
  2. The clock level: allocated to periodic processes.

The Arrangement of RTS-Components (pg 292 of [16])

Mani B. Srivastava et al. [17] identify two distinct phases in the design of real-time systems. Firstly, the high-level system specification is mapped onto a set of inter-communicating hardware and software modules. Secondly, the identified modules are generated using a mix of mapping, synthesis and library-based techniques.

A UML-based design methodology for real-time and embedded systems is presented by Gjalt de Jong [18], who uses object-oriented concepts to identify software components.

2.11 Design and Design only

The design of real-time systems prefers the K.I.S.S (Keep It Simple Stupid) principle. In addition to this point of view, Rob Williams [19] states, 'If you follow a good design technique, appropriate questions will emerge at the right moment, disciplining your thought process.'

Designing real-time software entails five major phases [15], viz:

  1. Software Architecture Definition
     Once it has been decided that real-time software is to be designed, a suitable architecture is chosen. Then the UML-based use-case design is carried out, where the system is treated as a black box and all the users (mechanical or human) are considered actors. The use-case diagram then shows all the possible interactions amongst the users and the system.

  2. Co-Design
     Next, for all the hardware, decide what software functionality needs to be allotted to which processor and/or link. The aim is that the resources should not get overloaded and the system should remain scalable. Similar modules ought to be placed nearby, as this reduces delay and eases inter-process message communication.

  3. Defining Software Subsystems
     These decisions are purely software-based, compared to the earlier ones, which depended on hardware considerations too. Determine all the features needed by the system, group them together, and consider whether any sub-system can be introduced to simplify the design. Also identify the tasks (along with their respective roles) that will implement the features identified.

  4. Designing of Features
     Once the tasks and features are settled, what needs to be designed is how messages will flow amongst these tasks. Some tasks will control others by keeping a record of the activity of a feature; for this, the concept of running timers is employed. Timers are initiated to watch the progress of events: if the desired event/task is carried out as planned, the timer is stopped; otherwise the timer times out and some recovery action, usually a roll-back, comes into play. Message interfaces need to be specified in detail (all the fields and their possible values).

  5. Designing of Tasks
     Once the tasks are identified, it must be decided which state-machine model will be employed to implement each one. Real-time systems are mostly state-based, so this decision is vital to completing the design. The state machine chosen can be for single, multiple or complex tasks, and the model can be either flat or hierarchical. This step is also pivotal because here lie the scheduling rules that are important for the smooth functioning of real-time software.

Kopetz (1991) [22] presents a methodology for the design of real-time systems that breaks the process down from specification to the fine-grained task, message and protocol level. MARS is the underlying architecture for the proposed methodology. All aspects of an engineered hard real-time system are considered, viz: predictability, testability, fault-tolerance (fail-stop or fail-operational), consideration of the complete system (both hardware and software), system decomposability (from abstraction to smaller modules) and evaluability (through an early dependability and timing analysis).

Previous design methodologies include SDARTS, DARTS, SDL, SCR, MASCOT, EPOS, SREM and so on. None of these provided all the characteristics mentioned above, so the authors came up with a novel methodology constrained by some basic assumptions. The design methodology is illustrated below:

The Basic Design Methodology Common to All

This entire process indirectly sets up an off-line scheduling system, which is the key to fulfilling the timing constraints of real-time systems. However, this indirect off-line scheduling does not suffice for predictable hard real-time systems; other issues, viz static and dynamic scheduling, need to be considered simultaneously. Other crucial parameters include the following [22]:


  • MART (MAximum Response Time)
  • MINT (Minimum INTerval)
  • MAXTE (estimated maximum execution time)
  • MAXTC (calculated maximum execution time)
  • Validity (period of time for which data is valid or holds true)
  • Maximum execution time of the communication protocol between tasks
  • CPU availability (CPU time usable for application tasks)
  • System overheads, i.e. task switching, periodic clock interrupts, CPU cycles used by DMA (e.g. for network access) and so on
  • MAXT (execution time: calculated from the timing values given above)

The importance of testing the design is also emphasized by these authors. They recommend a testing scenario where each component, cluster (open-loop cluster test and closed-loop cluster test) or task is tested once right after its design and once after its implementation. As we know, testing done at design time has a lower cost by all measures, hence the method is commendable. Other tests mentioned are the system test and the field test.

Finally, the authors mention a software tool, called MARDS, which supports design through MARS; the paper explains how MARDS aids the design process.

Zage points out in a paper [20] that the design phase of software has two major aspects, viz the architectural design and the detailed design. The author draws an analogy between designing software and constructing a true-to-life model of a building. To assist in devising a better design, or in tracing out errors in a design already made, two metrics have been developed by the team at Ball State University. The first is an external metric, De, based on information available at the architectural design level, i.e. the flow of data into and out of each module.

The second is an internal metric, Di, based on information available after detailed design, viz all the information given above as well as 'any chosen algorithms or the pseudo code developed'. This caters to the internal structure of the modules.

The author then combines these into a composite metric to measure 'design quality' for a design G:

D(G) = De + Di

where Di = i1(CC) + i2(DSM) + i3(I/O), and:

D(G) stands for the design quality,

CC stands for Central Calls,

DSM stands for Data-Structure Manipulations, and

I/O refers to the external device accesses.
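As a sketch, Di can be read as a weighted aggregate of CC, DSM and I/O per module. The unit weights below are an assumption for illustration (the actual weighting coefficients are calibrated empirically and are not given in the text), and the sample counts are hypothetical:

```python
# Hypothetical computation of Zage's design-quality metric D(G) = De + Di.
# Di aggregates central calls (CC), data-structure manipulations (DSM)
# and external device accesses (I/O); unit weights are an assumption.
def internal_metric(cc: int, dsm: int, io: int,
                    i1: float = 1.0, i2: float = 1.0, i3: float = 1.0) -> float:
    return i1 * cc + i2 * dsm + i3 * io

def design_quality(de: float, di: float) -> float:
    return de + di

di = internal_metric(cc=4, dsm=7, io=2)
# Modules whose D(G) is an outlier are candidate stress points.
print(design_quality(de=3.0, di=di))  # → 16.0
```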

Even more parameters could have been added to the calculation of Di, such as 'cyclomatic complexity, nesting levels, knot counts, live variable counts and declaration-use pairs and so on.' However, the author was not in favour of this, quoting an important aspect of metrics: trying to cover too many features in one metric muddles the result, while trying to associate just one feature with a metric extracts hardly any vital information at all.

This metric's suitability for detecting potential 'stress points' and error-prone modules was then tested against a large-scale software system (its suitability for small-scale software had already been proved), and interesting results were drawn. It was noted that, in the calculation of De and Di, the outliers were usually the stress points. When the outliers were so many that the standard deviation and the mean no longer agreed with each other, the author introduced a method of identifying 'X-less algorithms'. Once these were identified and marked straightaway as outliers, the rest of the calculations were performed on the remaining modules. This provided a better, more evenly-distributed result, proving the worth of the metric.

Finally, to prove the mettle of this metric further, the author tested a part of it (Di) against cyclomatic complexity, V(G). It was found that Di served the purpose best so far, even better than V(G). These results were then tallied with the LOC (lines of code) metric, which becomes available after code generation. As for the overall D(G) metric, it identified 100% of the module-error concentrations in large-scale software.

The author also provides a remedy for modules that turn out to be error-prone: the design should be re-tailored, the coding of such modules should be handed to the most experienced programmers on the team, or the testing time spent on those modules should be increased.

However, I would like to point out that if the solution lay in increasing the testing effort, there would be little point in going into such detail to uncover problems at the design phase. We should instead look for a solution at the design phase itself: re-tailor the design and run the metrics again.

Design metrics should be in conformance with the requirements set in the preliminary analysis phase. Model-driven development (MDD) is being used by real-time software developers to assure accurate real-time performance of complex systems. Using UML (UML 2.1), developers are able to use the abstraction of models to address and understand complexity. The application is simulated to ensure the algorithms perform properly; the model is then used to automatically generate real-time code that strictly adheres to the design.

The chosen metrics should pass the SMART test, wherein a metric should be S=Specific, M=Measurable, A=Attainable, R=Realistic and T=Timely.

To be able to measure a real-time system, the factors that need to be taken into consideration include: timeliness, performance, reactivity, accuracy, predictability, determinism, maintainability, adaptability, robustness, efficiency, reliability, cost, quality assurance, risk management, complexity and fault-tolerance.

Fohler et al (2002) [34] suggest a metric for control performance (QoC) based on real-time timing constraints. Only closed-loop control systems have been considered.

Timing constraints and values determine the performance and the areas of improvement for such a system. They say that the temporal values are of two types: fixed (task periods and task deadlines, i.e. the sampling period and time delay) and flexible (sets of feasible instance separations and response times, extracted from the type of controller chosen).

During the design, many values qualify in accordance with the timing constraints; deviations from these values lead to system errors. Two measures of system error are pointed out, namely the IAE (integral of absolute error) and the ITAE (integral of time-weighted absolute error).

One of these errors is used to quantify the QoC metric. The QoC metric allows the decisions to be based on both temporal and control information.

A strategy has been devised to improve the performance of real-time closed-loop control systems: instead of single time values (fixed or flexible), choose a set of ⟨instance separation, response time⟩ pairs at the design stage and then hand it over to the scheduler to choose the closest feasible pair. Moreover, the ability to change values at run-time allows further improvement of the system.
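The "closest pair" selection could be sketched as follows. The Euclidean distance measure and the numeric pairs are illustrative assumptions, not prescribed by the paper:

```python
# Sketch: pick the feasible (instance_separation, response_time) pair
# closest to the currently desired timing values.
def closest_pair(feasible, desired):
    return min(feasible,
               key=lambda p: (p[0] - desired[0]) ** 2 + (p[1] - desired[1]) ** 2)

pairs = [(10, 4), (12, 5), (20, 8)]     # hypothetical design-time feasible set (ms)
print(closest_pair(pairs, (11, 5)))     # → (12, 5)
```

At run time the scheduler would re-run this selection whenever the desired timing values change, which is what enables the on-line improvement the paper describes.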

2.12 Steps to Attain Useful Metrics for RTS

Measurement lies at the heart of everything; standards such as ISO 9000 and the SEI's Capability Maturity Model use metrics too.

As Grady [26] states, "Without such measures for managing software, it is difficult for any organization to understand whether it is successful." The paper by Linda Westfall [27] outlines "12 Steps to Useful Software Metrics." Basing our research on these steps, we outline twelve steps for software metrics of real-time systems. The discussion is presented below.

Purpose of Metrics for RTS

Firstly, identify who is to use the metric. If the metric does not have a user for whom it is being produced, the entire effort is futile. For real-time system software the users can be people related to functional management, project management, or the end product.

Secondly, determine precisely which entity needs to be measured. To aid in this process, the 'goal/question/metric' paradigm [27][28] may be employed.

Goal-Question Approach [28]

Thirdly, structure relevant questions, the answers of which will lead towards the goal(s) determined in the previous step.

Fourthly, for the real-time system software, formulate the objective according to the following formula(27):

The Basic Question [27]

Fifthly, hunt for standard definitions of the attributes and/or entities. If none are available, or if ambiguity hangs over them, then go ahead and define them yourself from the perspective of real-time systems. A concise set of related definitions is available in the IEEE Glossary of Software Engineering Terminology [29].

Sixthly, choose between direct measurement or indirect measurement.

Seventhly, if the measurement method is indirect, break it down to an atomic level so that it is clearly known which entities need to be measured to achieve the desired goal.

Eighthly, define thresholds, variances, control limits and so on. Be specific by giving percentages.

Ninthly, decide on the method of reporting from the real-time perspective. As [27] states, "define the reporting format, data extraction and reporting cycle, reporting mechanisms, distribution and availability."

Tenthly, determine any additional qualifiers that will be needed if the metric has to be given a wider spectrum than the one allotted to it.

Eleventhly, we ought to be concerned about the collection of data. Since the metrics devised here concern the design phase of real-time systems, the data to be collected is meager compared to that of the entire life-cycle.

Lastly, keep the ethics alive: make the selection of metrics and the collection of data easy and non-intrusive for the professionals involved.



Real-time systems are an integral part of the world of computing. Every system can benefit from the research done in the area of real-time systems, as normal systems are special cases of RTS: if every system were 'gifted' with the added burden of timing constraints and all the other peculiar features of RTS, each one could be called an RTS. Due to this, any work done on RTS can be beneficial to all systems.

The work in this thesis revolves around the design phase of RTS (only the design phase of application software, to be precise). The SDLC includes the following major phases, viz analysis, design, development, implementation and testing. The software engineering cycle for RTS differs slightly because of the crucial timing constraints that have to be met. Different design strategies have already been devised and are discussed in the Literature Review section of this thesis (please refer to it for further insight).

Given these differences in design methodologies, devising metrics (measurement methods) helps designers choose and re-evaluate their designs well before implementation. The design phase is the longest phase of the lifecycle, as it extends into the middle of analysis on one side and of development on the other. It can also be called the backbone of the entire SDLC, as it holds the core architecture of the software to be developed; this makes it crucial too.

Metrics for the design phase have been proposed earlier, namely feature-point metrics, fuzzy-logic based metrics [35] and so on; but what is needed is a unified approach. If problems can be located via metrics at the design phase, we can save on investment (monetary, time and so on) and 'nip the evil in the bud.' Identifying problem areas early also helps in predicting the difficulties and complications which may occur later in software development. Hence such metrics solve problems not only of the design phase but of all the other phases as well.

Experts have put forward a number of software design metrics for real-time systems, and they continue to propose new strategies and methodologies for measurement in this industry. As [36] points out, metrics have traditionally been classified into three levels based on measurement theory, namely: product, process and resource metrics. Product metrics are classified into architecture, runtime and documentation metrics; process metrics into management, life-cycle and case metrics; and resource metrics into personnel, software and hardware metrics. This research therefore attempts to analyze design quality metrics that can be applied to the design phase of real-time systems in order to predict flaws and improve the quality of real-time software applications.



The areas enumerated in the previous chapter will now be looked into, to devise metrics that assist in measuring solid results. Measuring flow control is vital in the design of real-time systems, especially for measuring Performance and Availability. What is important is that the receiver should allow for the transmission delay that the sender process goes through. H. Kopetz [33] proposes in his paper that a buffer, implemented in hardware or software, can fulfil this purpose; in the past this has been done with low-pass filters. We simply propose a metric which measures whether the buffer is accommodating all incoming events, without discarding, ignoring, or failing to service any of them.

This takes the form of a constraint, which if fulfilled can ensure the smooth execution of flow of events.

Mathematically this metric can be expressed as follows:

The size of the buffer should also be limited, i.e., it should not exceed the size of the program.

This metric can help avoid overloads caused by sensors, event showers, or both. It also helps in priority-based identification of events beforehand, so that nothing important goes missing.
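The constraint can be sketched in code as: every incoming event must find a free slot in the bounded buffer, and the metric flags a violation the moment an event would be dropped. The class below and its capacity are illustrative assumptions (the exact mathematical form of the metric is not reproduced here):

```python
from collections import deque

# Illustrative bounded event buffer for the Flow Control Metric.
# A violation is recorded whenever an event cannot be accommodated.
class EventBuffer:
    def __init__(self, capacity: int):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0            # count of metric violations

    def offer(self, event) -> bool:
        if len(self.q) >= self.capacity:
            self.dropped += 1       # event lost: constraint broken
            return False
        self.q.append(event)
        return True

buf = EventBuffer(capacity=3)
for e in range(5):                  # an event shower of 5 events
    buf.offer(e)
print(buf.dropped)                  # → 2: the buffer was undersized
```

A non-zero `dropped` count at design-time simulation would signal that the buffer (or the service rate) must be resized before implementation.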

During the design of real-time software, knowing the upper bound on the WCET (Worst Case Execution Time) helps give a system Predictability. The WCET value ought to be less than a certain threshold, set according to a prediction based on the knowledge of experts working in the same area. If the numbers of device accesses and context switches are low, the WCET will be met, as the discrepancies will be low.

Trend_of_WCET = No._of_device_accesses + No._of_context_switches

Mathematically, this can be represented as follows:

T(WCET) = N(Ad) + N(Cs)

where Ad = no. of device accesses

and Cs = no. of context switches
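The metric is a simple count, so a sketch is short. The threshold value below is a placeholder assumption standing in for the expert-derived threshold the text mentions:

```python
# Sketch of the Trend of WCET metric: T(WCET) = N(Ad) + N(Cs),
# where Ad = device accesses and Cs = context switches in the design.
def trend_of_wcet(device_accesses: int, context_switches: int) -> int:
    return device_accesses + context_switches

# Hypothetical threshold, set from expert knowledge of similar systems.
THRESHOLD = 20

def wcet_likely_met(device_accesses: int, context_switches: int) -> bool:
    return trend_of_wcet(device_accesses, context_switches) <= THRESHOLD

print(wcet_likely_met(7, 5))   # → True: low counts, WCET likely met
```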

The measurement of the WCET (Worst Case Execution Time) or MET (Maximum Execution Time) of real-time software can be based on different approaches, depending on whether the design architecture is event-based or time-triggered. If it is event-based, the calculation will be based on the choice of processor [21] and other factors such as the OS in use, which pre-emptive scheduling strategy it uses, and how it synchronizes tasks [22]. Moreover, it is difficult to compute the MET and WCET of a task without taking into account its interaction with other tasks [23]. However, in January 2009, Maaita [30] proposed techniques for enhancing the temporal predictability of real-time systems under a time-triggered software architecture.

The WCET can be estimated at the design phase of a real-time system. At this stage the estimate is said not to be very accurate [ ], but it is an important estimate nevertheless. Traditionally, estimating the WCET requires the factors explained in the previous paragraph; a newer approach, however, uses cyclic dependency between tasks, 'iterative convergence' and a 'probabilistic schedulability envelope' [24].

Basic Cycle Time is the time which serves as the lower bound for the real-time executive, in contrast to the WCET, which serves as the upper bound.

Hence, it goes without saying that the acceptable limit for a task lies between the Basic Cycle Time and the WCET.

A metric has been devised that measures the Basic Cycle Time, viz

B.C.T = Gp - Gp-1

where Gp-1 is the time taken for the completion of the predecessor event,

and Gp is the time taken for the completion of the current event under execution.
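Interpreting Gp-1 and Gp as the completion times of the predecessor and current events, the Basic Cycle Time is their difference. A minimal sketch, with hypothetical timestamps in milliseconds:

```python
# Sketch: Basic Cycle Time from the completion times of the predecessor
# event (g_prev, i.e. Gp-1) and the current event (g_cur, i.e. Gp).
def basic_cycle_time(g_prev: float, g_cur: float) -> float:
    return g_cur - g_prev

# A task's acceptable execution window then lies between BCT and WCET.
bct = basic_cycle_time(g_prev=100.0, g_cur=112.5)
print(bct)  # → 12.5
```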

The measurement of the design quality of an RTS is also of utmost importance. We have come up with a composite metric to measure the 'design quality' of a design G, named the RTA Structure Metric.

Z(G) = Ze + Zi + Zt

and Zi = i1 (CC) + i2 (DSM) + i3 (I/O)

and Zt = in-1(tn-1) - in(tn)

where Ze is the relative number of data transactions between modules,

Zi aggregates the number of central calls, data-structure manipulations and inputs/outputs to and from the module, and

Zt is the time taken for a module/event to be carried out.
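A per-module evaluation of Z(G) might be sketched as follows. Unit weights for the i-coefficients are an assumption, and Zt is treated simply as the module's measured execution time, per its description above; all sample numbers are hypothetical:

```python
# Hypothetical evaluation of the RTA Structure Metric for one module:
#   Z(G) = Ze + Zi + Zt, with Zi = i1*CC + i2*DSM + i3*IO.
# Ze: relative inter-module data transactions; Zt: measured time.
def z_metric(ze: float, cc: int, dsm: int, io: int, zt: float,
             i1: float = 1.0, i2: float = 1.0, i3: float = 1.0) -> float:
    zi = i1 * cc + i2 * dsm + i3 * io
    return ze + zi + zt

print(z_metric(ze=2.0, cc=3, dsm=4, io=1, zt=6.5))  # → 16.5
```

As with Zage's D(G), modules whose Z(G) values are outliers relative to the rest of the design would be the first candidates for redesign.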

Computing this metric straightforwardly can help determine the design quality of an RTS. Khan et al. [31] provide insight into the ambiguity associated with the term 'quality'. They state that quality is still a multi-dimensional, multi-faceted term, meaning different things to different people; how quality is understood is a matter of perception. The identification of markers which can aid in assessing the quality of a system may depend on many things, including management objectives, the goals set for the end product and the choice of system design. Different quality attributes relate to each phase of the SDLC, and it is essential to narrow the focus of study to a specific part. This is exactly what we have done: we have narrowed the study of quality metrics down to the design phase (our area of concern), and used parameters such as Zt, Zi and Ze (as explained above), which capture the temporal perspective as well as other vital information such as fan-in and fan-out.

The M.A.R (Message Activation Rate) and T.A.R (Task Activation Rate) are two entities that will be used as metrics. These rates are determined by two parameters, viz the maximum deviation between the finishing times of two consecutive event instances, and the maximum deviation of the finishing time among all instances.
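The two parameters can be sketched directly from a list of finishing timestamps. Interpreting "deviation" as the spread of finishing times is an assumption; the timestamps below are hypothetical:

```python
# Sketch of the two jitter parameters behind M.A.R and T.A.R:
# (1) the maximum deviation between consecutive finishing times, and
# (2) the maximum deviation among all finishing times.
def consecutive_deviation(finish_times):
    return max(abs(b - a) for a, b in zip(finish_times, finish_times[1:]))

def overall_deviation(finish_times):
    return max(finish_times) - min(finish_times)

ft = [10.0, 20.5, 29.0, 41.0]      # hypothetical finishing times (ms)
print(consecutive_deviation(ft))   # → 12.0
print(overall_deviation(ft))       # → 31.0
```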

H. Kopetz [32] proposed a methodology for fault tolerance, from which metrics have been derived here. He describes a Swiss-Cheese Model for the fault tolerance of a real-time system: once something goes wrong, the system should have back-up plans; hampered normal functioning should be self-healing; and even if the self-healing process fails, there should be a never-give-up strategy. Amidst all these diagnostic capabilities, if the system event that caused the problem has not allowed the system to self-heal, it can be declared a 'catastrophic system event.' A metric measuring the upper bound (UB) on the restarting of the system can help identify the event that caused the trouble in the first place; that event can then either be deleted or be reported to the designer/programmer for suitable changes. The metric can be stated as follows:

If Restart Value > Upper Bound (prefixed)

Then Event == Fault

Redesign Event.

The Event here refers to the event responsible for the delay in the restart value.

Detection of the event responsible for the delay in meeting the Upper Bound, calls for a detailed analysis of appropriate detection algorithms, which is beyond the scope of discussion of this thesis.



Since real-time systems are becoming more pervasive, it is necessary that software engineers have quantitative measurements for assessing the quality of designs at both the architectural and component level. The measurement process must derive real-time software measures and metrics that are appropriate for the representation of the real-time software being measured. Well-chosen metrics improve the design process and enhance the development process as well.


In this thesis, we have reviewed software metrics thoroughly. We also looked at the importance of measurement and saw why it was utterly necessary to concentrate on the design phase of real-time systems. We found out that by focusing on the design phase we could actually target and troubleshoot the problem areas of the entire life-cycle of real-time systems. We also deeply analyzed real-time design metrics proposed by different authors and experts in literature. Given below are results and discussions for all of these different devised metrics.


This metric caters to performance and availability criteria. It measures the time an event waits in a buffer.

This metric measures whether the buffer size allows for the delay.

We take the ideal size of the buffer to be X.

Now we will substitute different values for the other variable parameters, viz. Nb and N(e)b, and observe the trend in the result for T(e)b.

If Nb = 100 and N(e)b=100

T(e)b= 1

This indicates that it takes unit time (unity X=1), if the size of the buffer is such that it can accommodate all events equally, without having to make them wait in a queue.

If Nb = 100 and N(e)b=2

T(e)b= 0.2

This indicates that it takes 0.2 units of time (unity X=0.2) if the buffer is too large compared to the number of events that have to wait in the queue. This also indicates a wastage of resources at the design phase. We can take these trend changes into account to devise the optimum value for the buffer size, and to check whether the buffer size chosen at the design phase is optimal or not.
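The exact formula for T(e)b is not restated in this excerpt, so the sketch below assumes the form T(e)b = X · N(e)b / Nb, which reproduces the first trend value (Nb = 100, N(e)b = 100 gives T(e)b = 1 at unity X). The function and parameter names are illustrative only.

```python
def buffer_wait_time(n_b: int, n_e_b: int, x: float = 1.0) -> float:
    """Assumed form of the Flow Control metric T(e)b.

    n_b   -- size of the buffer (Nb)
    n_e_b -- number of events waiting in the buffer (N(e)b)
    x     -- ideal buffer size factor (X)
    """
    if n_b <= 0:
        raise ValueError("buffer size must be positive")
    # A buffer that accommodates all events equally yields unit time.
    return x * n_e_b / n_b
```

Under this assumed form, the designer can sweep Nb against an expected event load N(e)b and pick the smallest buffer whose wait time stays within the deadline budget.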



Metric: Measures the Trend of WCET by taking into account the number of device accesses and the number of context switches.

Background and Working:

This metric helps in giving an upper bound for the time allocated to the Real-Time Executive. The lesser the value of T(WCET), the lower will be the upper bound of RT-Executive.

Trend_of_WCET = No._of_device_accesses + No._of _context_switches

Mathematical Representation:

T(WCET) = N(Ad) + N(Cs)

For verification purposes, we take the value of the T(WCET) to be X once again.

Now we take different values for the 2 variables, viz No._of_device_accesses and No._of _context_switches.

If No._of_device_accesses=100 and No._of _context_switches=100

X=200 [A very high value of the WCET]

If No._of_device_accesses=100 and No._of _context_switches=2

X=102 [A moderate value of the WCET]

If No._of_device_accesses=2 and No._of _context_switches=100

X=102 [A moderate value of WCET]

If No._of_device_accesses=2 and No._of _context_switches=2

X=4 [An extremely small value for WCET]

Hence, we can conclude that as we vary the values of the two independent variables, the value of the dependent variable X varies accordingly; the trend in this variation can be seen above. Since this metric involves only a simple calculation (a single '+' operator), it is inexpensive to compute at the design phase.
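The Trend of WCET calculation above can be written down directly; the function name is illustrative, but the formula is the one stated in the text, T(WCET) = N(Ad) + N(Cs).

```python
def trend_of_wcet(n_device_accesses: int, n_context_switches: int) -> int:
    """T(WCET) = N(Ad) + N(Cs): the trend of the worst-case execution
    time, driven by the number of device accesses and context switches."""
    return n_device_accesses + n_context_switches

# The four trend cases discussed above:
# trend_of_wcet(100, 100) -> 200  (a very high value of the WCET)
# trend_of_wcet(100, 2)   -> 102  (moderate)
# trend_of_wcet(2, 100)   -> 102  (moderate)
# trend_of_wcet(2, 2)     -> 4    (extremely small)
```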



Metric: Determines the lower bound of the RT-executive's time of operation.

Background and Working:

This metric measures the difference between the previous task's value and the current one's.

Mathematical Representation:

To verify this value we take B.C.T once again to be X.

We find that as Gp-1 takes values greater than Gp, the values of B.C.T and X tally with the results expected at the design phase. This is a constraint, and it is what we expect to find.
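The mathematical representation of B.C.T is not restated in this excerpt; the sketch below assumes B.C.T = G(p-1) - G(p), which is consistent with the stated constraint that Gp-1 must exceed Gp. Both the assumed formula and the names are illustrative only.

```python
def basic_cycle_time(g_prev: float, g_curr: float) -> float:
    """Assumed form of the Basic Cycle Time metric: the difference
    between the previous task's value G(p-1) and the current task's
    value G(p). The text implies G(p-1) > G(p) at the design phase."""
    bct = g_prev - g_curr
    if bct <= 0:
        raise ValueError("expected G(p-1) > G(p) for a valid design")
    return bct
```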


Area of Concern: QUALITY

Metric: Measures the inflow, outflow, fan-in, fan-out, direct manipulations, I/Os, central calls and time-taken

Background and Working:

Provides an aggregate of the above mentioned features.

Mathematical Representation:

Z(G) = Ze + Zi + Zt

where Ze = e1 (inflow * outflow) + e2 (fan-in * fan-out)

and Zi = i1 (CC) + i2 (DSM) + i3 (I/O)

and Zt = in-1 (tn-1) - in (tn)

For verification purposes, we take Z(G) as X. We notice that for the value of X to remain predictable, we must know the values of its dependent components. We will deal with each of the three components of X separately below.

Let the value of Ze be A.

Ze = A = e1 (inflow * outflow) + e2 (fan-in * fan-out)

If the dependent parameters inflow, outflow, fan-in and fan-out all take high values, we get a high value of A; if they all take low values, A is low. If we aim for a moderate value of A, then at least one parameter in each of the two product terms ought to be low.

The same discussion holds true for the following equations:

Zi = i1 (CC) + i2 (DSM) + i3 (I/O)

Zt = in-1 (tn-1) - in (tn)

Following this discussion, and letting Zi = B and Zt = C, we find that X = A + B + C. The same reasoning implies that the value of X can be moderated if we predict the values of the dependent components A, B and C with accuracy.
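The aggregate Z(G) = Ze + Zi + Zt can be sketched directly from the three equations above. The thesis does not fix the weighting coefficients e1, e2, i1, i2, i3 or the instance weights i(n-1), i(n), so the sketch defaults them all to 1.0 as an illustrative assumption.

```python
def rta_structure_metric(inflow, outflow, fan_in, fan_out,
                         cc, dsm, io,
                         t_prev, t_curr,
                         e1=1.0, e2=1.0, i1=1.0, i2=1.0, i3=1.0,
                         i_prev=1.0, i_curr=1.0):
    """Z(G) = Ze + Zi + Zt, per the equations above.

    Ze (external view) = e1*(inflow*outflow) + e2*(fan-in*fan-out)
    Zi (internal view) = i1*(CC) + i2*(DSM) + i3*(I/O)
    Zt (temporal view) = i(n-1)*t(n-1) - i(n)*t(n)
    """
    z_e = e1 * (inflow * outflow) + e2 * (fan_in * fan_out)
    z_i = i1 * cc + i2 * dsm + i3 * io
    z_t = i_prev * t_prev - i_curr * t_curr
    return z_e + z_i + z_t
```

Keeping the coefficients explicit lets a designer tune how strongly the external, internal and temporal views contribute to the overall quality figure.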


Area of Concern: TIMELINESS

Metric: Measures the activation rate of instances.

Background and Working:

Maximum deviation of the finishing time of two consecutive instances.

Maximum deviation of the finishing time among all instances.
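The two parameters behind the M.A.R metric can be computed from a recorded series of instance finishing times. The function name is illustrative; the two deviations are those named in the text.

```python
def finishing_time_deviations(finish_times):
    """Compute the two parameters behind the M.A.R metric:
      - the maximum deviation of the finishing time of two
        consecutive instances, and
      - the maximum deviation among all instances (max - min)."""
    if len(finish_times) < 2:
        raise ValueError("need at least two instances")
    consecutive = max(abs(b - a)
                      for a, b in zip(finish_times, finish_times[1:]))
    overall = max(finish_times) - min(finish_times)
    return consecutive, overall
```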


Area of Concern: PERFORMABILITY (Meyer, 79)

Metric: Measures the activation rate of tasks

Background and Working:

This calculates the task activation rate and then computes the total task activation rate. If our system is designed keeping this value in mind, then it can also handle peak-load performance.

Mathematical Representation:

For this metric's values, we observe that En should not be greater than En-1. Similarly, t, the independent temporal variable, must not be zero, or the metric will lose its value as abnormal results are produced.
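The excerpt states the constraints on En, En-1 and t but not the closed form of T.A.R. The sketch below assumes T.A.R = (E(n-1) - E(n)) / t, an assumption made only to illustrate the two stated constraints, which it enforces explicitly.

```python
def task_activation_rate(e_prev: int, e_curr: int, t: float) -> float:
    """Assumed form of the T.A.R metric, enforcing the constraints
    stated in the text: E(n) must not exceed E(n-1), and the
    temporal variable t must be non-zero."""
    if t == 0:
        raise ValueError("t must be non-zero, or abnormal results follow")
    if e_curr > e_prev:
        raise ValueError("E(n) must not be greater than E(n-1)")
    return (e_prev - e_curr) / t
```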


Area of Concern: FAULT TOLERANCE

Measures: This metric measures the upper bound on the restarting of the system, which allows the faulty event to be found in the design phase.

Mathematical Representation:

If Restart Value > Upper Bound (prefixed)

Then Event == Fault

Redesign Event.

For this metric, we note that it is already constraint-based; hence no further inferences can be drawn. All we can say is that the restart value should remain within the prefixed upper bound for the system to function as real-time.
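The constraint check above translates directly into code: if the restart value exceeds the prefixed upper bound, the event responsible for the delay is declared faulty and flagged for redesign. The function name and return strings are illustrative.

```python
def check_restart(restart_value: float, upper_bound: float) -> str:
    """Upper Bound metric check: a restart value exceeding the
    prefixed upper bound marks the responsible event as a fault
    that must be redesigned."""
    if restart_value > upper_bound:
        return "fault: redesign event"
    return "ok"
```

At the design phase, this check can be applied after each simulated restart to pinpoint which event pushed the restart time past its bound.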



Real-Time Systems have been an object of attraction for researchers and developers alike. The reason is that Real-Time Systems are used in places where even a small error can become life-threatening: a response delayed even slightly beyond the prescribed temporal value is totally unacceptable. Moreover, every system is a special case of a Real-Time System. Both of these factors indicate the importance of Real-Time Systems.

Although this is such an important area, unfortunately not much work has been carried out to enhance the utility of real-time software products and to minimize errors early in the development lifecycle of Real-Time Systems. Since Real-Time Systems are being deployed at an ever greater scale, it is necessary that software engineers have quantitative measurements for assessing the quality of designs at both the architectural and component level.

This thesis proposed eight metrics pertaining to the design phase, namely: Flow Control Metric, Trend of WCET Metric, Basic Cycle Time Metric, RTA Structure Metric, Message Activation Rate Metric, Task Activation Metric and Upper Bound Metric. Their details are given in the previous chapter. We have also presented a detailed analysis of the literature currently available on Real-Time Systems. We conclude that employing Real-Time Design Metrics at the design phase is an effective way both to cut development costs and to predict flaws early.



Measurement of Real-Time Systems has become widely known and well accepted today. Every system is a special case of a real-time system, which makes the research presented in this thesis relevant to a wide audience. It is accepted that using measurements and metrics in software applications enables developers and designers to gain insight into the flaws and errors of a developing application well in advance. That is why the technique we have proposed, employing metrics at the design phase, helps eliminate flaws quickly without making the cost of troubleshooting too high. It also indicates the probability of issues appearing at later stages of the development cycle, and it helps in classifying, analyzing and distributing different attributes of a real-time software application with less effort and time.

Real-Time Design Metrics have been studied and analyzed by many experts and practitioners in the past. Since these metrics have been seen to have shortcomings and limitations, we suggest resolving these issues to get the maximum benefit from them. Many of these metrics lack both theoretical and empirical validation, which is why they are of little use to industry. We also noted that, to make matters worse, some metrics had, and still have, ambiguous definitions, while others admit very different interpretations of the same metric.

As a future recommendation, we suggest removing this ambiguity and establishing unanimous standards. If properly implemented, real-time design metrics bring the promise of better and cheaper software. The most obvious extension to this research is to analyze the degree to which these metrics correlate with indicators such as software performance and design quality. The eight metrics presented in this thesis can also be tested on live real-time systems and their results recorded. Once verified by experts in industry, these metrics can be adopted across the real-time systems industry.

There is also a need to consolidate metrics for Real-Time Systems at different levels, from requirements engineering to deployment. This would promote a deeper understanding of the evolution of real-time applications and their complexity. Advances in any area of real-time systems need to be matched by metrics devised for that particular area; this strategy is highly promising for research in the immediate future.
