Extensive research has been conducted in the area of risk management, and different models and approaches have been suggested to minimize or eliminate risk. This survey examines four risk management models/approaches found to be useful, and concludes how they can be incorporated into the requirements engineering phase with low cost and process overhead, keeping the Pakistani software market in mind.
These models/approaches are:
a) A Flexible and Pragmatic Requirements Engineering Framework for SME. 
b) A Formal Risk Assessment Model for Software Evolution. 
c) Requirement Reliability Metrics for Risk Assessment. 
d) Managing Requirements Engineering Risks: an analysis and synthesis of literature. 
This framework is divided into the five classic phases of requirements engineering described above: elicitation, analysis, specification, verification and validation, and management of requirements. The authors designed a set of practices and techniques for each phase, defining a practice as an abstract task that has to be performed in defined phases, and a technique as a way of implementing a given practice.
An overview of the practices can be seen in Figure 2; different practices are related to different phases, and practices are further classified into three sub-classes: Basic, Advanced and Context. The approach distinguishes between the concepts of technique and practice, defining a practice as an abstraction of a technique. The documentation defining a practice should be minimal, as short as one page or a single presentation slide.
A practice example can be seen in Figure 1.
Figure 1: Practice Example
The techniques, on the other hand, are the concrete solution providers that carry out a practice and set goals on how to conduct it; a practice can be conducted by more than one technique.
Figure 2: Overview of Practices
The relationship between practices and techniques is many-to-many, and is best illustrated with an example. Figure 4 shows the relationship between the practices Elicit NFRs and Elicit FRs and the techniques Soft-goals and Stakeholder workshop.
The technique Soft-goals supports the practice Elicit NFRs and also the practice Document rationales, since the rationales for the NFRs are documented when the technique is used. The practice Elicit NFRs is supported not only by Soft-goals but also by Stakeholder workshop, which in turn also supports Elicit FRs.
The paper recommends not prescribing how a practice must be performed; instead, techniques describe possible ways of implementing it, and the technique documents should contain enough information for the user to understand them.
This framework can be utilized for many purposes. For example, small and medium enterprises can use it as a tool to evaluate their requirements engineering process; to do so they need to define a set of practices that describe how they work.
In some cases it can also be used to solve a particular problem, for example security issues. The biggest advantage of this framework is that it can be used for process improvement. Its disadvantages are that it is more suitable for medium-sized organizations than for small ones, and that it is somewhat complex and would require some cost to adopt, which makes it harder to introduce in the Pakistani market.
1.1.2 Characteristics of the Model:
It suggests a framework that small and medium organizations can implement without extra cost.
It emphasizes the importance of avoiding risks by implementing selected practices through suitable techniques.
The model is easy to implement if software engineering knowledge exists in the organization and there is a will to improve.
When we talk about evolutionary software development, we have to accept that difficult business and product requirements often change as the project proceeds. Here we cannot assume predefined scope boundaries for the application to be built, yet we are still bound by schedule constraints. So how can we keep the schedule and cost factors under control? According to the author of this paper, the answer is risk management through proper risk assessment.
The question arises of how we can analyze risks for a project whose final requirements have not yet been established. We have to perform early risk assessment to be able to meet schedule and cost limits, but current early risk assessment is an ill-defined problem that relies on individual judgment and impractical assumptions such as unchanging requirements. To deal with this problem we have to make risk evaluation more structured and objective.
This problem can be addressed by setting out a practical schedule and performing a correct evaluation in terms of time and cost. More dependable estimations can be made using formal methods and objective assessment.
Several types of tools are used by industry to estimate effort and time. These tools can be classified into three classes:
a) Very early evaluation: very early, crude and subjective estimation.
b) Macro models: basic COCOMO, Putnam and function points, applied after the requirements phase.
c) Micro models: intermediate and detailed COCOMO, PERT/CPM and Gantt techniques.
The problem with these models is that they do not consider requirement volatility, a very important factor and an essential characteristic of evolutionary software projects.
Some other important factors are also ignored by these tools, such as personnel volatility, project complexity and the productivity of the team working on the project.
The AFRM model was built to tackle all the above problems and was found to be beneficial especially for evolutionary software projects. The model is based on the following metrics:
i) Requirement Volatility
It can be calculated as:
Requirement Volatility (RV) = Birth Rate (BR) + Death Rate (DR)
BR = (NR / TR) * 100
DR = (DelR / TR) * 100
NR = number of new requirements
DelR = number of deleted requirements
TR = total number of requirements
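The volatility metric can be sketched in a few lines of Python; the function name and the example counts below are illustrative, not from the paper:

```python
def requirement_volatility(new_reqs, deleted_reqs, total_reqs):
    """RV = BR + DR, where BR and DR are the birth and death rates
    of requirements expressed as percentages of the total count."""
    if total_reqs <= 0:
        raise ValueError("total number of requirements must be positive")
    birth_rate = (new_reqs / total_reqs) * 100      # BR = (NR / TR) * 100
    death_rate = (deleted_reqs / total_reqs) * 100  # DR = (DelR / TR) * 100
    return birth_rate + death_rate                  # RV = BR + DR

# e.g. 12 new and 5 deleted requirements against a baseline of 100
print(requirement_volatility(12, 5, 100))  # 17.0
```

A high RV early in the project signals that the requirements baseline is still unstable.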
ii) Complexity
A formal specification can be used to measure the complexity of the requirements. A requirements representation that supports computer-aided prototyping, such as PSDL, is useful in the context of evolutionary prototyping.
The paper uses a complexity metric called Large Granularity Complexity (LGC), calculated as follows:
LGC = O + D + T 
Where, O = number of atomic operators (functions or state machines)
D = number of data streams (data connections between operators)
T = number of abstract data types required for the system
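As a minimal sketch of the sum above (the argument names are mine, not the paper's):

```python
def large_granularity_complexity(operators, data_streams, data_types):
    """LGC = O + D + T: atomic operators, data streams between them,
    and abstract data types required for the system."""
    return operators + data_streams + data_types

# e.g. a PSDL-style specification with 10 operators, 7 streams, 3 types
print(large_granularity_complexity(10, 7, 3))  # 20
```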
1.2.2 Characteristics of the Model:
The model is perfectly suited to projects that are evolutionary by nature.
The model enables a project manager to assess the possibility of success of the project very early in the life cycle of the project.
This model provides objective results rather than subjective results.
The requirements part of this research paper makes it feasible to calculate the complexity and volatility of requirements, and helps in finding the areas that need focus and may require mitigation at later stages.
1.3 Requirement Reliability Metrics for Risk Assessment:
There are several methods for measuring quality. This paper discusses a metrics-based approach that can identify risk factors, quantify requirements and help in analyzing risks. The requirement reliability metrics offer risk management through a simple analysis of requirements versioning, which the paper calls iteration. Following the IBM RUP (Rational Unified Process) classification, it divides requirements into four classes:
a) Complete: requirements that are fully defined and in which no change will occur.
b) Weak: requirements that are not defined correctly and in which change will most probably occur.
c) Complex: requirements that involve a certain level of complexity in technicality, development or design.
d) Ambiguous: requirements that have two or more interpretations.
The paper then divides the requirements change history into quality indicators, and defines priorities at three levels:
a) Mandatory: Requirements that have the highest priority and must be met at all costs.
b) Evident: Requirements that have a low level of priority but should be met.
c) Frill: Requirements that have the lowest priority.
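As an illustration of the two classifications together, a requirement record might be tagged like this (the field names and the example text are mine, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    category: str  # "Complete", "Weak", "Complex" or "Ambiguous"
    priority: str  # "Mandatory", "Evident" or "Frill"

r = Requirement("The system shall log every transaction",
                category="Complete", priority="Mandatory")
print(r.category, r.priority)  # Complete Mandatory
```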
This paper provides a way to estimate software reliability by measuring the risk factor of the different requirement classifications, and gives a good picture of how the requirements are maturing and which areas need attention.
In this table the iterations are the different versions of the requirements specification, and the numbers give the number of changes that occurred in each type of requirement. For example, if 40 appears under Mandatory in iteration 2, this means that 40 changes were made to mandatory requirements after the first version. Std is the standard deviation and Mean is the average; IV is the Index of Variation, a simple ratio of Std to Mean.
A similar metric is created for each of the four requirement types defined earlier: Complete, Weak, Complex and Ambiguous.
This has to be done for all types of requirements, after which the total risk factor associated with them is calculated. The abbreviations used are:
AB = Ambiguous Requirement, CO = Complete Requirement
WK = Weak Requirement, CP = Complex Requirement
OV= Overall (AB+CO+WK+CP)
Table 1.1 Index of variation throughout the project
Once all the standard deviations and means have been calculated, the Std, Mean and IV are added to obtain the risk factor.
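For one requirement class, those steps can be sketched as follows; the iteration counts are hypothetical, and population standard deviation is assumed since the paper does not say which variant it uses:

```python
from statistics import mean, pstdev

def class_risk_factor(changes_per_iteration):
    """Returns (mean, std, iv, risk_factor) for one requirement class.
    IV = Std / Mean; on one reading of the text, the risk factor is
    the sum Std + Mean + IV."""
    m = mean(changes_per_iteration)
    s = pstdev(changes_per_iteration)  # population standard deviation
    iv = s / m if m else 0.0
    return m, s, iv, m + s + iv

# hypothetical change counts for the Mandatory column over 3 iterations
m, s, iv, rf = class_risk_factor([40, 25, 10])
print(round(iv, 3))  # 0.49
```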
Risk Factor Graph
“The outcome of the evaluation of requirement iterations is based on data consistency among them. If the data consistency among these iterations is small and difference among them is high it means the quality attributes impacting on reliability are lacking and if the consistency between them is high and variation is low then quality attributes impacting on reliability has elevating influence “.
At the end of the paper the author proposes a new way of deriving risk:
Risk = Index of Variation – Reliability
Reliability is a relative term and depends on an organization's best practices and its own formulation.
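The closing formula is a one-liner; the reliability value that goes in is whatever the organization's own formulation yields:

```python
def requirement_risk(index_of_variation, reliability):
    # Risk = Index of Variation - Reliability; the paper leaves the
    # reliability value to the organization's own formulation
    return index_of_variation - reliability

print(round(requirement_risk(0.49, 0.30), 2))  # 0.19
```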
1.3.1 Characteristics of the Model:
It provides objective numerical values for the different requirement types and also suggests a formula to assess the overall requirement risk.
The model can be adapted to an organization's own practices.
It correlates the risk value with the requirements index, which is an indication of the quality attributes that relate to the quality of the software.
It is less complicated and gives an easy method for finding the areas that need attention.
1.4 Managing Requirements Engineering Risks: an Analysis and Synthesis:
This paper first describes the different studies conducted on managing requirements engineering risks and then derives a contingency model, but first we need to look at the framework used for the literature analysis. This framework divides requirements engineering risks into three categories, Requirements Reliability, Complexity and Availability, and RE tactics into three areas, Requirements Specification, Requirements Experimentation and Requirements Discovery.
The paper first deals with the risks associated with Requirements Complexity, referring to complexity as the quantity and structure of the information available to develop a new software product. Many complexity-related risks are identified in the paper, and it is recommended that these be taken into consideration before starting a project, e.g. relative project size, technical complexity, number of links to existing systems, number of links to future systems, and the need for new hardware or software.
Requirements Reliability refers to the dynamics of the data or information about the product to be designed and developed. These dynamics occur when the concerned stakeholders' judgment changes, because as the project progresses we come to know the system better, and also because of changes in internal or external circumstances. The paper defines an approach for making requirements reliable: increase reliability by combining continuous learning with systematic documentation.
Requirements Availability can be defined in terms of the communication gap between developers and end users; it mainly depends on the physical, cultural and conceptual differences between them.
The model suggests different techniques, drawn from different studies, for each requirements tactic. For Requirements Specification, for instance, it suggests formal techniques like formal analysis and formal mapping, and unstructured elicitation techniques.
For Requirements Experimentation it suggests iterations, observation techniques, structured elicitation and collaborative techniques; for Requirements Discovery it suggests cognitive techniques like mapping and structured elicitation, and group and observation techniques like surveys, interviews, email bulletin boards, usability studies and observational studies.
There are further techniques defined later in the paper, but they require specific process identification and extensive work, so given this survey's scope of non-process-oriented organizations those techniques are not covered here.
Then we come to the principle of how to better understand the three factors of availability, complexity and reliability: the paper suggests that all of them can be handled by the techniques mentioned above, and that risks can be avoided by taking actions in line with those techniques.
The most important part is the prioritization of the associated risks. For example, if a team concerned with a resource risk inducts new resources, the induction itself creates a team risk of integrating new members into an established team. So the basic principle is to understand the risk portfolio of the project and adopt techniques that tackle individual risks without impacting others.
The paper presents a contingency model for this purpose, using McFarlan's model as a template. It rates requirements availability, requirements complexity and requirements reliability on a high/low scale, which yields 8 possible requirements situations.
Figure 5: Relationship between different requirement engineering situations
Figure 6: Requirement Engineering Risk distribution
This leads us to divide projects into four categories: high risk projects, engineering projects, routine projects and design projects. High risk projects face complex requirements while at the same time dealing with issues linked to the availability and reliability of the gathered requirements information. Projects rated HI_HI_HI should primarily concentrate on requirements discovery to make sure they build the right thing for the end users. The paper suggests that such projects should consider approaches based on a mixture of experimentation and specification tactics.
Engineering projects, on the other hand, have a complex set of reliable requirements, and the availability of the requirements rarely changes across the project lifecycle; they are rated LO_LO_HI. There is little risk of misunderstanding what to build, but the high complexity risk demands that project integration and control be handled wisely.
Design projects have simple basic requirements, but severe risks are associated with the availability and reliability of the gathered requirements. The primary goal of such a project is to design practical software. In the table they are rated HI_HI_LO. These projects should also work on discovery tactics, since they need to interact with the end users to get things right and set up a mechanism to validate what they have gathered.
Finally, routine projects are rated LO_LO_LO, which makes them quite an easy task: requirements are available and stable, and the developers understand the task, so no special attention is required. Straightforward approaches are best for this type.
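The four named situations lend themselves to a small lookup table; the position order (availability risk, reliability risk, complexity) is inferred from the examples in the text, and the fallback label for the four unnamed scenarios is my own:

```python
# The four project categories named in the paper, keyed by
# (availability risk, reliability risk, complexity) ratings.
CATEGORIES = {
    ("HI", "HI", "HI"): "High risk project",
    ("LO", "LO", "HI"): "Engineering project",
    ("HI", "HI", "LO"): "Design project",
    ("LO", "LO", "LO"): "Routine project",
}

def classify(availability_risk, reliability_risk, complexity):
    key = (availability_risk, reliability_risk, complexity)
    return CATEGORIES.get(key, "Mixed profile (not named in the paper)")

print(classify("LO", "LO", "HI"))  # Engineering project
```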
1.4.1 Characteristics of the Model:
It is easy and cost-effective: we only have to assess the requirements, rate them as high/low, and categorize them in order to better manage risks.
This does not require much time and can be done at the start of the project.
This can help us identify the types of risks at a very early stage.
- Mark Keil, Paul E. Cule, Kalle Lyytinen and Roy C. Schmidt: A Framework for Identifying Software Project Risks
- Amir Akhter Jamili: Requirement Reliability Metrics for Risk Assessment
- Juan C. Nogueira, Luqi, Valdis Berzins and Nader Nada: A Formal Risk Assessment Model for Software Evolution
- Thomas Olsson, Joerg Doerr, Tom Koenig and Michael Ehresmann: A Flexible and Pragmatic Requirements Engineering Framework for SME
- Juan Carlos Nogueira, Luqi and Swapan Bhattacharya: A Risk Assessment Model for Software Prototyping Projects
- Say-Wei Foo and Arumugam Muruganantham: Software Risk Assessment Model
- Mira Kajko-Mattsson and Jaana Nyfjord: State of Software Risk Management Practices
- Lars Mathiassen, Timo Saarinen, Tuure Tuunanen and Matti Rossi: Managing Requirements Engineering Risks: an Analysis and Synthesis
- Borland: Mitigating Risk with Effective Requirements Engineering
- Don Gotterbarn and Simon Rogerson: Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement
- Keshnee Padayachee: An Interpretive Study of Software Risk Management Perspectives
- Dipak Surie: Evaluation and Integration of Risk Management in CMMI and ISO/IEC 15504
- C. Williams: The CMMI RSKM Process Area as a Risk Management Standard
- Roger S. Pressman: Software Engineering: A Practitioner's Approach