Integrating Software Security Into The Software Development Lifecycle
Daniel Owens
SystemSecurities
San Diego, CA 92123, USA
Abstract
Applications make up the core of any system--for example, small applications serving critical roles (e.g. the Basic Input/Output System); word processors; firewalls; e-mail servers; and operating systems--and, as a result, applications must be written both in a secure fashion and with security in mind or they may become the weakest link, allowing the circumvention of various physical and logical access controls. Currently, many in management are unsure of where to integrate security in their programs, unsure of the impact to cost and schedule, and unsure of how to build security into their programs. Furthermore, many designers and developers are too worried about making a product work under tight deadlines to think about security if they are not specifically directed to do so. As a result, installation and operation documentation often neglect security and the measures that the administrator, operator, and owner of the software should take to ensure that the application is secure.
In this vein, this work helps all members of a project understand where security should be integrated into a general software development lifecycle, illustrates why security should exist at these points, and shows how to keep software safe no matter the stage in the lifecycle. By separating the lifecycle into responsibilities as well as incorporating the stages, all personnel will understand both the responsibilities of all other personnel (as well as their own) and when to perform their responsibilities. This work also helps all personnel to understand why each responsibility is important, how to properly execute that responsibility, and what the impacts to the project may be if something is not performed or is given an inappropriately low amount of attention.
1 Introduction

There are currently numerous software development models--for example, iterative development, incremental development, agile, waterfall, spiral, and extreme programming (XP). (Lethbridge and Laganiere 2005)(Seacord 2005) Recently, the United States federal government and several major software development companies have created secure software development models or attempted to introduce security into existing models; however, they often fail to provide proper information for either the management or the designers and developers to actually implement the proposed models. (M. Howard 2005)(Defense Information Systems Agency 2008)(Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle 2004)(Security Across the Software Development Lifecycle Task Force 2004) Moreover, the current works often neglect to give complete instructions to the various personnel to whom the work is addressed or fail to give enough reason for adopting the proposed security measures. In some cases, the works are designed with websites and their underlying scripts or interpreted applications (C# and Java) in mind and do not adequately identify the measures and implications that an application, as defined in a more general sense, and its development team would need. (Defense Information Systems Agency 2008)(Defense Information Systems Agency 2008)
Figure 1 - A general lifecycle (Owens, Lauderdale, et al. 2008)
More recently, however, the National Institute of Standards and Technology used a general lifecycle that can be implemented by almost any project for almost any reason. (Owens, Lauderdale, et al. 2008) The lifecycle involves six phases, none of which can be skipped, but which can be restarted any number of times. The lifecycle, shown above, starts at the requirements phase (where requirements are gathered), and then moves to the architecture phase (where the architecture is developed), to the design phase (where the actual design is created), to the implementation phase (where the programming is performed), to the test phase (where the application is heavily tested), and then to the deployment phase (where the application is deployed). If a bug is discovered after the application is deployed, we repeat the above process, which bears resemblance to all of the aforementioned models. The details of the various phases are examined on a role-by-role basis below.
2 Application Program Managers

Application program managers (APM) are key personnel who oversee the management of the development of an application. These persons are responsible for setting the budget, schedule, and labor, and for keeping the development process on track. They give designers and developers high-level blueprints and goals using budgets and scheduling, so they are very influential when it comes to the priority and role of security in the software development lifecycle.
Without the support of upper-level management, in particular program managers, security will likely suffer in favor of greater functionality or performance; however, if upper-level management supports a strong emphasis on security during the development lifecycle, it is likely that security, functionality, and performance will all increase. Most often, management misunderstands security because designers and developers are concerned with functionality and none of the parties understands how to securely design software, or what software security is and is not. A secure application is one that is properly designed, coded, and safeguarded during all phases of development and rollout, comes with adequate documentation to securely deploy the application, and is properly decommissioned when the time comes for the application to be replaced. While the APM should be involved in all phases of the lifecycle cited in Section 1, they are most influential during the requirements and architecture phases because these phases are less technical in many ways and allow the APM to influence the direction of the program at the policy level. To best create policies, guidelines, and standards, an APM must have a broad grasp of the security concerns facing applications today and the cost and schedule impacts therein.
2.1 Inviting Security Into The Requirements Phase

The requirements phase is crucial to creating a support structure upon which an application can be built securely. It is important for an APM to take the lead and set forth the security requirements that the application must meet, as well as to properly determine what other requirements an application must follow. The APM must decide what the application will do and what data it will hold in order to properly determine any legal requirements (e.g. the Health Insurance Portability and Accountability Act of 1996 and the Sarbanes-Oxley Act of 2002). The requirements may include the protection of sensitive, private, or classified data and, if there are multiple clients, may actually fall under various guidelines and standards, so it is important to fully understand the various requirements. If, for example, the application is to be used by the Department of Defense (DoD) to carry sensitive information, it must purge all memory of that information once the information is no longer needed (Defense Information Systems Agency 2008), and if that data is health information, the application must also adhere to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) when handling such data. By properly scoping and examining the possible security requirements that an application will need to adhere to, proper architecture and design decisions can be made, averting serious design flaws that could otherwise doom a program.
The client APM has special duties not necessarily held by the developer APM. The client must make clear during the requirements phase all of the requirements that are expected of the application. The client is most
effective when the requirements are clear, concise, and thoroughly conveyed to the developer during the
requirements phase. Clients should use the power available to them to ensure that the requirements include training
requirements, proper protection of source code, adherence to any requirements incumbent upon the client to use the
software, secure coding practices, adherence to coding standards, and testing the application for security flaws.
Furthermore, the client should require that all members of the development team, to include management, testers,
coders, and designers, receive regular application security training. By requiring management to take some sort of
application security training, the client is able to send a signal to the developer that security must be integrated into
the development process and is important to the client.
2.2 Inviting Security Into The Architecture Phase

The architecture phase allows the developers to fully understand and appreciate the architectures upon which the application is to execute. If a complete understanding of the architecture under which the application will run is not attained during this phase, it is impossible to ensure that the application is deployed securely, let alone developed to take advantage of the environment's security measures. Thus, an APM should use the architecture phase to clearly state the various environments that the application is to operate in. The APM should ensure that the operating system, memory constraints, processing power, networking environment, and other factors are all reviewed during this phase.
The client has the duty to ensure that any operating constraints are acceptable in the client's environment and that all assumptions are valid or viable. For example, the client should not be expected to add thousands of dollars' worth of equipment unless the existing equipment is extremely outdated, and should be certain to properly convey what assumptions are acceptable in the architecture phase. Because the application should leverage the architecture, it is best to ensure that the architecture is well understood and well defined, even for general applications such as operating systems.
3 Application Designers

Application design flaws are considered the source of about half of all software security problems. (Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle 2004) Other problems stem from requirement and implementation flaws, but design flaws still outnumber all other types of security bugs. As a result, it is important that designers take careful heed of the requirements, architecture, and implementation considerations during the actual design phase. At this point, the designer should decide upon the appropriate language to be used and create a set of programming standards and guidelines that can be enforced. These documents should include a list of explicitly denied libraries and calls because of the danger those calls could create if carelessly programmed or abused. Appendix C Sample Unsafe Function List has numerous examples from various languages that may be used as a base point by developers. Additionally, the designer must carefully examine the requirements and architecture to ensure that the application can leverage the security afforded to it by the architecture, meet all requirements, and have a design that both lends itself well to the chosen language and is secure in design.
3.1 Inviting Security Into The Design Phase

3.1.1 Choosing A Language

Designers should also carefully examine the requirements and architecture so that the most appropriate programming or scripting languages may be chosen. For example, if the architecture and requirements place the software on an embedded device, it may be less wise to use PHP as compared to C or assembly. While such an example may seem exaggerated in what choices a designer makes, it illustrates how simple the choice is once security is injected into the process of choosing a language. While no language is necessarily less secure given its grammar, the implementation of the language, as well as the power the language affords, often creates openings for error. For example, it is generally easier to accidentally overflow buffers in assembly as compared to C, just as the same is true in C as compared to C++. As such, designers should recognize that there are dangers inherent in the power afforded both designers and developers by a given language or the implementation thereof.
Designers should recognize that although a masterful assembly developer can create a very fast, small application, the language does not lend itself well to multi-threading, making interfaces, making large programs, or being well written by those who have not been using it for many years. Such matters affect both security and functionality. If an assembly language program is written by a novice, it is likely to be neither faster nor smaller than the assembly code that a C or C++ compiler would have generated, thereby taking away the two biggest reasons for using assembly instead of C or C++. In fact, the code is much more likely to contain various logic errors and other bugs if written in assembly, especially if there are multiple programmers--even if they are all very experienced assembly programmers. Part of this is because it is more difficult to create coding standards that go beyond whitespace and comments for such a low-level language.
Another reason programs written in lower-level languages are likely to contain various logic and design errors is that differences in style and thinking have a greater impact in a lower-level language, because the compiler, interpreter, or assembler in lower-level languages does less to the code prior to the code becoming machine-level instructions. As one moves up from low-level, to mid-level, to high-level programming languages and then into scripting languages, the compilers and interpreters equalize code more--thus reducing the programmer's ability to make code faster without using tricks, but also reducing the amount of knowledge and skill required to program in the language.
A side effect of much of this is that the higher-level the language, the more likely it is that users will need to run additional software to actually execute the code, which decreases performance and introduces additional applications that can become attack vectors. For example, assembly, C, and C++ compile into machine code and do not require additional software. In order to run a Java application, one must have the Java runtime environment, which is small. In order to run PHP, one must have the much larger PHP interpreter installed and likely a web server hosting the PHP pages (although the latter is not a requirement). Since we see an increase in what is required to execute the application, we also see an increase in our surface of attack. Additionally, the higher-level the language, the less likely it is that the programmer can do things--things that are often insecure anyway (Seacord 2005)--to increase the performance of the application.
Designers should remember that there is no single language that can fit all projects, but that the choice of language influences the coding standards, the practices, the required skill sets of the programmers, and the required experience, and can influence both security and functionality. Moving too low may introduce excessive bugs, while moving too high may actually increase the surface of attack, as well as decrease performance and functionality. (Seacord 2005)(Howard and LeBlanc 2003) The decision will affect the complexity of the coding standards, the performance of the application, and the overall security of the hosting system.
3.1.2 Creating And Enforcing Standards And Guidelines

After choosing a language, it is important for designers to create a series of standards and guidelines prior to anyone even beginning to write code. It is important that the designers tailor the standards to the language or languages that were chosen in Section 3.1.1 and that they provide both standards and guidelines to the developers prior to entering the implementation phase of the lifecycle. Standards will provide the developers enforceable rules by which to program the application, and guidelines will provide best practices that should be followed to produce the best code possible.
Often, when standards are developed, security is never taken into consideration. While the standards often dictate things such as commenting appropriately (and defining this), file nomenclature, how variables are to be named, and other such details, they often lack the many tenets of building secure software. For example, standards should provide an unsafe function list from which deviations require waivers and are taken very seriously. While it can be argued that most functions are safe and only unsafe if improperly used, many functions in Appendix C Sample Unsafe Function List are there because they are either too easy to use improperly, too hard to use properly, often misused, or have been behind many previous software vulnerabilities. The list should be kept as a living document, incorporating functions that are commonly producing major vulnerabilities in the current project or previous projects led by the team. For example, if during the lifecycle it is discovered that the software often suffers from overflows because strcpy is used and the function can be replaced, the team should place it in the unsafe function list and disallow further use of the function without first requiring a waiver and intense peer review of the code requiring the waiver.
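As a minimal sketch of what such a replacement might look like in C (the wrapper name and rejection policy here are illustrative, not taken from any actual project standard), a banned strcpy call can be supplanted by a bounded copy that makes the destination size an explicit, checked parameter:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical bounded replacement for strcpy: copies at most
     * dst_size - 1 bytes and always NUL-terminates. Returns 0 on
     * success, -1 if the source would not fit. */
    static int safe_copy(char *dst, size_t dst_size, const char *src)
    {
        if (dst == NULL || src == NULL || dst_size == 0)
            return -1;
        size_t len = strlen(src);
        if (len >= dst_size)
            return -1;              /* would overflow: reject outright */
        memcpy(dst, src, len + 1);  /* copy including the NUL terminator */
        return 0;
    }

    int main(void)
    {
        char buf[8];
        /* strcpy(buf, input) is on the unsafe list; the wrapper makes
         * the buffer size a checked parameter instead. */
        if (safe_copy(buf, sizeof buf, "too long for buf") != 0)
            fprintf(stderr, "input rejected: would overflow buffer\n");
        return 0;
    }

Rejecting oversized input outright, rather than silently truncating as strncpy does, also avoids the separate hazard of an unterminated or unexpectedly shortened buffer.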
Standards should also include the dos and don'ts of security. For example, only thread-safe and reentrant functions may be used in multithreaded code. While this may seem an obvious step to many, a developer who is not well versed in multithreaded programming or who does not realize that the code is multithreaded can overlook it. A simple method of enforcing this would be to not mix multithreaded code into a class with non-multithreaded code and to have the multithreaded classes be clearly identifiable in both the design documentation and the code. When the code is properly separated as described above, simply searching for non-reentrant or multithread-unsafe functions in the appropriate classes allows for quick discovery of some of the most common causes of race conditions.
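As an illustration in C (assuming a POSIX environment, since strtok_r is a POSIX function), strtok is precisely the kind of non-reentrant call such a search would flag in a multithreaded class; its reentrant counterpart moves the hidden parser state into a caller-owned variable:

    #include <stdio.h>
    #include <string.h>

    /* strtok() keeps hidden static state, so two threads tokenizing
     * at the same time corrupt each other; it belongs on the banned
     * list for multithreaded classes. strtok_r() is the reentrant
     * form: the caller owns the state via the saveptr argument. */
    static void print_fields(char *line)
    {
        char *saveptr;
        for (char *tok = strtok_r(line, ",", &saveptr);
             tok != NULL;
             tok = strtok_r(NULL, ",", &saveptr)) {
            printf("field: %s\n", tok);
        }
    }

    int main(void)
    {
        char record[] = "alpha,bravo,charlie";
        print_fields(record);  /* safe to call from multiple threads */
        return 0;
    }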
Additionally, the standards should make it very clear that the developers are not to deviate from the selected programming languages and are not to mix languages in a given module or class. For example, C code such as strcpy and goto should be banned from all C++ files, assembly from all C files, and Java files should never contain any C or C++. Additionally, the modules themselves should contain only a single language. This ban will increase performance, decrease compilation time, decrease code complexity, enforce loose coupling, and in many cases help make code more secure. Some of the insecurities that creep into machine or byte code are actually the direct result of how compilers deal with or handle the mixing of languages. By separating the modules, it is less likely that the languages will be accidentally mixed in a given file.
Guidelines, while allowing more flexibility than standards, show the developer how to properly code for the project. As such, the guidelines (and standards for that matter) should only illustrate good, clean, secure coding practices. C++ guidelines or guides should not contain double pointers in C format (e.g. char**) but should instead use arrays; C guidelines should not contain goto; and PHP or other website language guides should properly validate any user input. By illustrating what to do, the guides and standards serve to educate developers, and so should be given the utmost attention by expert designers and developers, with attention paid to both detail and security.
3.2 Additional Design Considerations

While taking the above steps can solve many problems, there still exist additional design considerations that can influence security before a single line of code is written. These problems may be that the application's levels of trust are not layered; a poor choice of algorithms; bad third-party libraries; a bad compiler; application boundaries that are poorly drawn, poorly protected, or incorrect; incorrect relationships between the boundaries; or various other design flaws. Some flaws, such as a bad compiler, are easy to solve. If, for example, a C++ compiler later turns out to not follow proper standards or is not performing as intended, it can often be replaced or updated with one that does not have such issues. Other design flaws may actually require a redesign of the application itself--if not the complete abandonment of the application or certain functionality therein.
There exist numerous cryptographic algorithms, each with an intended purpose. Some algorithms provide stream cryptography, while others provide block cryptography; some provide data integrity, while others provide data confidentiality; and some provide speed, while others provide superior strength. If the application provides either a client or a server capability and must be able to interoperate with other clients or servers, it may be difficult to recover from a poor algorithm. While many older networking technologies illustrate this (e.g. LAN Manager and NT LAN Manager Version 1), a more recent example is Wired Equivalent Privacy (WEP). When WEP was first implemented, the underlying hardware was built such that it could quickly and easily handle WEP, but once KoreK broke WEP (and proved that its use of RC4 could be quickly and efficiently defeated) using an attack that has since been improved, many of those devices were unable to be updated to the replacement standard--Wi-Fi Protected Access (WPA).
Because of poor hardware design and a lack of planning on the part of the networking device manufacturers, many devices can only use WPA with TKIP--which also relies on the broken RC4 algorithm--instead of WPA with the Advanced Encryption Standard (AES). While the problem may appear to have been the hardware designers' fault, even in this case it can be traced to multiple failures. The first two failures were in the design: the algorithm designers failed to create a strong algorithm, while the protocol designers failed to choose a strong encryption standard when one was available (instead choosing the speed of the stream cipher). The next two failures were a mixture of design and implementation: the software developers failed by making application specifications too tight to accommodate change, and the hardware manufacturers failed to properly plan for changes that might need more speed or memory, instead using the minimum requirements specified by the software developers.
This same problem played out again when Microsoft unveiled its latest workstation operating system--Windows Vista. Microsoft allowed many hardware manufacturers to claim that their hardware was built to run Vista (of various flavors), when in fact the hardware was far too underpowered. This would later anger many consumers who saw Vista as slow, cumbersome, and little better than Windows XP in looks, even though Vista (on a properly powered machine) is none of those things. (Hansell 2008)(Lohr 2008)(Espiner 2007)(Poeter 2008)
4 Application Developers

While design and requirement flaws are difficult to alleviate, an application that is poorly written or that was not written with security in mind may prove next to impossible to fix. If developers program recklessly or with little skill, the application may be so riddled with flaws that it must be developed from scratch. Some of these flaws may be simple, and others may be complex or contain egregious logic flaws. Additionally, rogue developers present an undeniable and often overlooked threat to the security, stability, and functionality of an application. As such, it is important that the development team be well versed in both secure programming and how to keep the source code safe.
4.1 Developing A Faulty Device

A loss of integrity means that a faulty device has been introduced. This device may be a malfunctioning processor, bad memory, electromagnetic noise across the wire, a bad network interface card (NIC), or even a malfunctioning or malicious object (e.g. user, software, hardware). According to the Byzantine Generals Problem (Lamport, Shostak and Pease 1982), while it may be possible to detect that integrity has been lost and that a faulty device exists, without integrity checks (e.g. checksums) and/or additional resources (e.g. monitoring every bit for a flip using network sniffers and memory snapshots) it would be impossible to deduce which device is faulty. As such, it is important that software be able to provide these checks on any sensitive data and all boundary data--any data that crosses a boundary either as ingress or egress data. While it may be likely that the media through which the data passes check for data integrity, these checks would only discover data changes made while passing through the media, leaving cases where data is modified prior to entering the media or after the data has passed through (e.g. changed by the application itself because of a race condition, a faulty processor, or bad memory). As such, applications should guard against the loss of integrity, which can take two major forms: a complete loss or misplacement of data, and data manipulation.
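A minimal sketch of such a check in C follows; the choice of Fletcher-16 is illustrative (a real system might prefer CRC-32 or a keyed MAC), but it shows how a checksum computed at the sending boundary and verified at the receiving boundary detects a single flipped bit:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fletcher-16 checksum over a byte buffer: a cheap integrity
     * check suitable for detecting flipped bits in boundary data. */
    static uint16_t fletcher16(const uint8_t *data, size_t len)
    {
        uint16_t sum1 = 0, sum2 = 0;
        for (size_t i = 0; i < len; i++) {
            sum1 = (uint16_t)((sum1 + data[i]) % 255);
            sum2 = (uint16_t)((sum2 + sum1) % 255);
        }
        return (uint16_t)((sum2 << 8) | sum1);
    }

    int main(void)
    {
        uint8_t msg[] = "example payload";
        uint16_t sent = fletcher16(msg, sizeof msg);

        msg[3] ^= 0x01;                       /* a single flipped bit */
        uint16_t received = fletcher16(msg, sizeof msg);

        if (sent != received)
            printf("integrity failure detected (0x%04x != 0x%04x)\n",
                   sent, received);
        return 0;
    }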
4.1.1 Complete Loss Or Misplacement Of Data

Data integrity may be lost by misplacing or completely dropping data. For example, if a network protocol requires that the message field contain the hostname first, then the type of response desired (e.g. an Internet Protocol address or Mail Exchange address), and then the class, but the application sends the data in a different order, the application has now broken the protocol. (Mockapetris 1987) The protocol described above is the Domain Name Service (DNS) protocol and the corresponding DNS query. When an application does not adhere to the protocol, the receiving computer may or may not be able to decipher the true intent of the message.
Recently, the author worked on a major United States Air Force project known as Common Link Integration Processing (CLIP). The software was developed by multiple contractors and subcontractors and was to serve as translation software of sorts--taking packets from various data links and then transmitting them across other data links in the appropriate format. As of the time of this writing, numerous Trouble Reports (TR) existed against CLIP whereby the software would not properly populate the translated packets, such as CLPV200002307. In this TR, CLIP did not properly populate the TN Addressee field in J12.4 Response to Order (WILCO) messages on JREAP-A (a tactical data link type used by the military). The observed behavior was the field being set to 102 instead of the address of the intended recipient. In this case, remote units would never finish processing the message because they were not the designated addressee (unless such a system happened to exist in the distributed application). The misplacement of data--putting 102 in the field instead of 1--led to a WILCO never being processed unless there happened to be a system with the address that was improperly placed into the field. If such a system existed, it would attempt to process the remainder of the message, rather than discard it as the others did. What the addressee would do depends on the data and numerous other factors, but at best, the addressee would eventually discard the message and
at worst, CLIP would trigger a NULL pointer dereference or similar crash in the addressee, resulting in the loss of a segment of the distributed computing system.

CLIP also contained errors where data truncation occurred (e.g. CLPV200001940), cutting off data that was supposed to be transmitted. Because there were no message integrity mechanisms in CLIP, it was difficult to tell whether the data was purposefully left off, cut off, or ever transmitted (or existed) in the first place.
The receiver was led to believe that the message was complete, when in fact it was not. For example, the message, "Attack target Tango-Delta-One-Niner and then change your course to 115813. Intercept the tanker there and refuel before going to Rammstein and provide air cover," would be truncated to "Attack target Tango-Delta-One-Niner and then change your course to 115813. Intercept the tanker there and refuel." The receiver would not know to go to Rammstein and provide air cover and would have to ask for the next mission or might make some assumption, such as return to base. This flaw could--at best--simply increase network traffic if the sender resends that particular question (the increased traffic would be the cost of the original text plus another header), slowing the time in which the question is answered. This makes a difference on a slow network, which is why protocols such as the Hypertext Transfer Protocol (HTTP) allow the requesting of multiple files in a single query. The flaw could also result in an affirmative response by the receiver, who does not have the complete message. Even more important would be if the next message from the sender told the receiver to "Then return to base." If such were the case, the receiver would attack the target, change course, intercept the tanker and refuel, and then return to base, never traveling to Rammstein to provide air cover as intended. This could result in the loss of life if such a scenario were to materialize.
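One defensive pattern, sketched below in C under an assumed framing scheme (this is not CLIP's actual wire format), is to have every message carry an explicit declared length so that the receiver can detect truncation and request retransmission rather than acting on a partial order:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical framing check: each message carries an explicit
     * declared length so the receiver can detect truncation instead
     * of silently acting on a partial order. */
    struct framed_msg {
        uint16_t declared_len;   /* length the sender claims */
        const char *body;
        size_t received_len;     /* bytes actually received */
    };

    static int msg_is_complete(const struct framed_msg *m)
    {
        return m->received_len == m->declared_len;
    }

    int main(void)
    {
        const char *order = "Attack target TD-19; change course to 115813";
        struct framed_msg m = {
            .declared_len = (uint16_t)strlen(order),
            .body = order,
            .received_len = strlen(order) - 10   /* simulate truncation */
        };

        if (!msg_is_complete(&m))
            fprintf(stderr, "truncated message: got %zu of %u bytes, "
                            "requesting retransmission\n",
                    m.received_len, (unsigned)m.declared_len);
        return 0;
    }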
Another, perhaps more disturbing integrity issue observed in CLIP occurred when data was created that was not there before. CLPV200001817 serves as an example of this. When CLIP created a certain JREAP-A message, CLIP always appended sixteen extra bits to the end of the message. During testing, the Verification and Validation (V&V) test team noticed that the appended bits were hexadecimal zero. The amount of data is half a word long (one word equals 32 bits on a 32-bit processor, and there are 8 bits in a byte), which is a large amount of information (two characters, or half of a 32-bit virtual address). To application security professionals, the major concern was the question of where this data came from. Developers, on the other hand, were most concerned with the question of the effect that this extra data had on a client, or whether filters would drop the packets as malformed. To application security professionals and developers, the question of whether or not CLIP could accidentally terminate (likely in a failure) was a big concern, as was whether CLIP created what is equivalent to an off-by-one buffer overflow on either the middleware or endware. If CLIP overflowed the buffer, this may have been a math error, an integer overflow, another overflow that had already occurred, or a malicious backdoor inserted into the source code.
While it is most likely an accidental mathematical error in this particular case, an attacker could insert such mistakes in an attempt to slowly leak sensitive data to a monitoring, untrustworthy user--thereby using CLIP as a Trojan. The user may be an application--perhaps a host--that has been designed to gather the extra sixteen bits and is slowly putting the pieces together, or a malicious operator. The data may even be indicative of a time bomb put into the code such that when the developer is fired, he or she can send out a single signal; in a distributed system such as the one CLIP creates, this signal would slowly spread and wreak havoc on the network. As such, the data fields should have had integrity checks. In CLIP's case, the problem was twofold: the heavy use of C code in a C++ application led to direct memory manipulation and pointers (which in a complicated application is bound to eventually cause problems), and the use of code clearly violated the coding standards set forth by the designers. The development team made heavy use of junior programmers, subcontractors who were not required to use the primary contractor's standards, and third-party code that was never examined for correctness or tested to ensure that the code functioned properly.
4.1.2 Data Manipulation

On July 20, 2008, Amazon's S3 network experienced a problem that eventually required the entire S3 network to be shut down and every system on it to be reset. The S3 network is a giant distributed system that provides data
storage capabilities to numerous other corporations and spans two continents. The system is a lot like the distributed application that CLIP is a part of, only simplified, with far fewer functions and less critical goals. The S3 network is well protected, and all data leaving the network has integrity checks, as does all data passed within the system except for one single message: the system state message. That morning, a faulty device flipped a single bit in the device's report of the system state. The integrity failure was transmitted to all of the neighbors in the system, and an endless loop of chatter began between the neighbors and eventually between all nodes in the distributed system trying to figure out the real state of the distributed system. The chatter became so strong that almost no other processing or communication took place; the entire system had to be taken down, the state had to be cleared, and then everything had to be brought back up and synchronized. No networking protocol checksum (e.g. TCP's and UDP's checksums) could have caught this loss of integrity because it happened prior to the encapsulation, but the single bit being flipped resulted in the complete loss of service for eight hours. It took about two and a half years for a device to fault such that an incorrect--but valid--system state was transmitted, but all that it took was a single bit in the state to be manipulated such that it was no longer correct, and the entire network had to be shut down. (The Amazon S3 Team 2008)
CLIP again serves as an example, since it suffered similar bugs that resulted in it sending out bad information; CLIP provided no data integrity checks outside of those already performed by the platform--assuming that the platforms have checks on network packets. If CLIP were to modify data either because of a programming error or a bug (or because of a malicious attack), the modification might never be caught, but certainly not by CLIP. For example, CLIP is known to have programming and logic errors that could create an intentional or unintentional data modification, such as double deallocation. If a pointer were to be deallocated twice, the consequences would vary but could include spraying corrupted data across the entire heap and even destroying the system's stack. Furthermore, CLIP had numerous buffer overflow vulnerabilities such that an overflow could modify other data in memory (there are no real protections in the Integrity RTOS or Fedora, under which CLIP was designed to operate, that would prevent an overflow), which may then be transmitted across the network. The modification may be just a single byte, but as the above example from Amazon shows, even a single bit can create havoc--especially in a distributed application. Single-byte mutations, commonly referred to as off-by-one overflows, are common in applications but have proven particularly deadly in networking applications because of the number of packets that they process.1
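The double-deallocation hazard, and its common mitigation of clearing the pointer immediately after freeing it, can be sketched in a few lines of C (the packet structure here is illustrative, not CLIP's actual code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Freeing the same pointer twice corrupts the allocator's heap
     * metadata; the usual mitigation is to NULL the pointer
     * immediately after freeing it. */
    struct packet {
        char *payload;
    };

    static void packet_destroy(struct packet *p)
    {
        free(p->payload);
        p->payload = NULL;   /* free(NULL) is defined to be a no-op */
    }

    int main(void)
    {
        struct packet p;
        p.payload = malloc(64);
        if (p.payload == NULL)
            return 1;
        snprintf(p.payload, 64, "J-series message");

        packet_destroy(&p);
        packet_destroy(&p);  /* harmless now; without the NULL this
                              * would be undefined behavior */
        return 0;
    }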
Applications such as CLIP also rely heavily on inter-process communication (IPC), making it plausible that data be mutated during the IPC handshake--perhaps because of an overflow of the buffer underlying the IPC messaging protocol or some similar problem. This sort of issue may not be detected, especially if perror and errno (or similar functions and variables in other languages) are used, as they may introduce race conditions that result in the error flag being switched to no error or the error descriptor flag being switched to some other error description. If a problem were to accidentally occur, such as a single bit getting flipped and changing a NULL terminator to something else, the result could be devastating depending on the language used (e.g. C) if the data is not properly checked for correctness and integrity problems. IPC can also suffer from underflows, deadlock, livelock, starvation, and other issues, which can result in data manipulation or the loss of data.
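The perror/errno hazard can likewise be illustrated with a short C sketch: because errno is a shared flag that any subsequent library call may overwrite, the defensive pattern is to capture its value immediately after the failing call:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* errno is per-thread on modern systems but is still overwritten
     * by any later library call, so its value must be captured
     * immediately after the failing call. */
    int main(void)
    {
        FILE *f = fopen("/nonexistent/config", "r");
        if (f == NULL) {
            int saved = errno;             /* capture before it changes */
            /* ... other calls here may reset or overwrite errno ... */
            fprintf(stderr, "open failed: %s\n", strerror(saved));
            return 1;
        }
        fclose(f);
        return 0;
    }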
4.2 Inviting Security Into The Development Phase

Development is the culmination of multiple activities to form a single phase, perhaps more so than the previous phases. During the development phase, code is produced--often by multiple people--and then somehow managed. The code should be well written, with careful thought given to the methodology, adherence to all requirements, design documentation, coding standards, and protocols or standards implemented by the application. Additionally,
1 For examples, please see http://arts.kmutt.ac.th/lng104/Web%20LNG104/3rdWorldHacker/howto/DosAttack.htm (Nestea), http://vulnerabilities.aspcode.net/20928/Off+by+one+buffer+overflow+in+the+parse+element.aspx, http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=421283, and http://en.securitylab.ru/notification/285890.php.
the management of the code is fundamental to the code's security, because a breach of the code's storage locations could be devastating to the integrity of the code and the application itself. As such, the code should be carefully monitored for rogue developers, malicious intruders, and accidental or unjustified modifications. The code must also be monitored to ensure that concurrent development is handled in a safe manner whereby integration of the code or branches will not result in insecurities or bugs.
4.2.1 Observing Secure Programming Practices

A good design is key to a well-built, well-thought-out application, but even with a solid design, an application may be coded such that it contains numerous flaws. As such, it is important that all of the developers follow the standards and guidelines put forth by the design team, as well as the design plans themselves. It is also important that the development team take heed of Section 4.1 and ensure that data integrity mechanisms are used and are sufficient to detect (and in some cases recover from) the loss of integrity. Development teams should be sure to follow a structured development model, especially on large projects or those where modifications to the same file are likely to be concurrent. It is imperative that during both the design and development phases, best practices and guidelines from the language creators and maintainers (if available, as with PHP and Java) are used alongside industry and security best practices (see Appendix B Secure Programming Guideline Repositories for some of the more common repositories of such information). The development team must also ensure that the design is sound and will not create unforeseen problems; if such problems are noticed, the design may need to be reviewed and revised.
Secure development should include having the source code examined for security flaws. While security flaws are often thought of as concerns held exclusively by security professionals, most of the flaws hold functionality concerns as well. For example, buffer overflows and integer overflows are major security concerns. If a buffer or integer overflow were to occur, the application could (at best) crash, which is functionally undesirable. While it is true that security is largely concerned with the misuse of the flaw, few people would find it desirable for an application to crash in the middle of something important (as defined on a user-by-user basis), especially if data is lost. Another example is that of command injection. While it is unlikely, depending on the application, that someone accidentally injects a command that is successfully executed, if a malicious user were to leverage that to create a denial of service or perform harmful activities, the functional impacts could be severe. In order to be both effective and efficient, it is best for development teams to perform peer reviews when changes are made and to leverage that process by having the code also examined for security flaws. If CLIP had been properly examined for functionality problems, the double deallocation would likely have been caught; if CLIP had been examined at all by the development team for security problems, it would likely not have contained more than thirty-eight thousand flaws (Owens, CT&E Report for the CLIP Release 1.2.1.26 2008).
4.2.2 Securing The Repository

On May 17, 2001, an attacker managed to use a compromised Secure Shell (SSH) client on a legitimate user's computer to gain access to the Apache Software Foundation's main servers. After gaining initial access, the attacker used a vulnerability in the OpenSSH server on one of Apache's computers to escalate privileges. After this, the attacker began replacing files with Trojanized files designed to further penetrate the network. The attacker's activities were detected during a nightly audit of the systems' binary files, and Apache immediately took the compromised server offline. The server turned out to be one of Apache's major servers--handling the public mailing lists, web services, the source code repositories of all of the Apache projects, and the binary distribution of the projects. Apache had to install a fresh operating system, change all of the passwords, remove the backdoors, and then begin the process of using both manual procedures and automatic processes to determine whether any source code or binary files related to the projects had been changed. (Frields 2008)(Apache Software Foundation 2001)
Source code is in many cases more human-readable than the compiled version of the code. Additionally, source code oftentimes also includes comments that may help one to understand the code, while compiled code does not normally include such information. Because it is easy to modify source code to contain a hidden backdoor, logic
bomb, time bomb, or other malicious code that is not likely to be found in a large project, the source code is often a
target of disgruntled employees, employees who did not leave on the best of terms, and otherwise malicious or
curious persons who gain or have access to the repository. (Howard and LeBlanc 2003)(Pfleeger and Pfleeger
2007)(Seacord 2005)
Since the source code can be modified for malicious purposes with little or no knowledge of the actual application, and because it is constantly modified by authorized users, it must be well protected against unauthorized access or modification. An authorized user should only be allowed to modify specific sections of code when directed to, and under no other circumstances. Furthermore, such access should be routinely audited, looking for suspicious activity. As such, it is important that the repository have access controls and auditing such that all changes can be traced to a specific user at a specific date and time. If this cannot be achieved, the source code is not secure and must be assumed to have been tampered with.
If malcode may have been entered into the repository, source code audits and code reviews should be performed on the entire code base to ensure that the software is safe. Without this assurance, the software can only be assumed to be malware and harmful to the system upon which the compiled code is placed. For this reason, it is important that the developer keep track of all changes, verify that the changes are valid, and continuously audit user access rights. If a breach occurs, its cost is markedly high given the cost of examining all code both manually and through automated means, as the Apache case study illustrated.
5 Application Testers

Many of the design and implementation flaws that exist today were either not caught during testing or were ignored. The testers verify that the software does what it is required to do, does what it is supposed to do, and behaves appropriately under all conditions. Tests are normally well-structured and well-documented procedures that are designed to create reproducibility and allow for quick and easy regression testing. This structured testing process, however, can reduce the amount of data that actually comes out of testing and often results in a simplistic pass-and-fail mentality. (Dowd, McDonald and Schuh 2006) While it is logical to create structured test procedures for most tests, the testing team must be afforded flexibility and encouraged to follow any leads discovered during testing to maximize the effectiveness of the testing.
Positive testing is one of the most common forms of testing during development--that is to say, many of the tests performed against software verify that the software does what it should do. (Dowd, McDonald and Schuh 2006) Negative testing is far more complex and generally involves a greater number of possible tests. For example, if users are allowed to input any positive number between twenty and fifty, test teams will often verify that the software operates properly when those numbers are provided and may check a handful of other numbers (such as -1, 0, 1, 19, and 51). It would consume far too much time to check the software's behavior when presented with every single value that a user could input. The test team may have performed some basic bounds checking, but it is far less likely that types were checked. For example, suppose that an "a" were input instead of a number. If the software does not properly validate the data, it may crash, it may convert "a" to some sort of number, or it may just pass the input straight to a backend. Test teams should always try to provide multiple data types to a given input.
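A minimal C sketch of the bounds-and-type check described above (the helper name is hypothetical; the twenty-to-fifty range comes from the example) shows how a test team would expect well-written software to behave:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Accept only an integer in [lo, hi], rejecting non-numeric
     * input (e.g. "a"), trailing junk, and out-of-range values. */
    static int parse_bounded(const char *text, long lo, long hi, long *out)
    {
        char *end;
        errno = 0;
        long val = strtol(text, &end, 10);
        if (errno == ERANGE || end == text || *end != '\0')
            return -1;              /* overflow, not a number, or junk */
        if (val < lo || val > hi)
            return -1;              /* out of the allowed range */
        *out = val;
        return 0;
    }

    int main(void)
    {
        const char *cases[] = { "35", "19", "51", "a", "20abc", "0" };
        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            long v;
            if (parse_bounded(cases[i], 20, 50, &v) == 0)
                printf("%-6s -> accepted (%ld)\n", cases[i], v);
            else
                printf("%-6s -> rejected\n", cases[i]);
        }
        return 0;
    }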
Another common developer mistake that may or may not be covered by test teams involves encodings. For example, an application may expect input to be encoded using ASCII, but the user may be inputting data using UTF-8. There have been numerous vulnerabilities against software such as Apache Tomcat, Apache HTTP Server, and Internet Information Services (IIS) that stemmed from allowing multiple encodings without properly verifying user input or the encoding being used. Test teams, as a result, should check the encoding and language requirements and ensure that the application behaves appropriately not only when other encodings are presented, but also when the allowed encodings are presented.
Since more and more software is either multithreaded or multiprocess-based, test teams should attempt to trigger race conditions. While there are numerous ways to do this, the most common method is to slow down the application by either slowing down the system clock or feeding massive amounts of data to the system and lowering the priority of the process (or processes). Many times, however, tests for race conditions can only be generalized and partially documented because of the nature of the flaw and the way that most race conditions operate. These tests require the test team to be flexible and to adjust the strategy not only to the application, but also to the operating conditions and the reactions of the system.
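A classic target of such testing, sketched below in C (assuming a POSIX environment), is the time-of-check-to-time-of-use (TOCTOU) race: slowing the victim process widens the window between the check and the use, making the flaw easier to trigger:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The file checked by access() can be swapped (e.g. for a
     * symlink to a sensitive file) before open() runs; slowing the
     * process makes the window between the two calls wider. */
    int main(void)
    {
        const char *path = "/tmp/report.txt";  /* attacker-influenced */

        if (access(path, W_OK) == 0) {
            /* window: the file can be replaced here */
            int fd = open(path, O_WRONLY);
            if (fd >= 0) {
                /* ... write ... */
                close(fd);
            }
        }
        /* Safer pattern: skip access() and open with O_NOFOLLOW,
         * then check ownership on the open descriptor via fstat(). */
        return 0;
    }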
There are numerous other flaws that should be tested for, such as memory leaks, double allocation, wild pointers, underflows, logic bombs, input validation, input sanitization, data protection, inadequate access controls, injections (command, SQL, LDAP, etc.), integer overflows, format string vulnerabilities, and buffer overflows. Many times, the most effective means of testing for a wide variety of problems is fuzz testing. There are many different fuzzing applications available that allow the fuzzing of numerous protocols and specifications. Fuzzing allows testers to test a large amount of input--both valid and invalid--in an automated, generally unsophisticated manner, much like brute-forcing a password. If possible, the test team should couple the testing activities with an exhaustive fuzzing of all input methods available.
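A minimal random-input fuzzing harness might look like the following C sketch (parse_message is a hypothetical stand-in for the input-handling routine under test; real teams would typically reach for a dedicated, coverage-guided fuzzer):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical parser under test. */
    static int parse_message(const uint8_t *buf, size_t len)
    {
        if (len < 4 || memcmp(buf, "MSG:", 4) != 0)
            return -1;
        return 0;
    }

    int main(void)
    {
        srand(12345);   /* fixed seed so failing cases are reproducible */
        uint8_t buf[256];

        for (long iter = 0; iter < 1000000; iter++) {
            size_t len = (size_t)(rand() % (int)sizeof buf);
            for (size_t i = 0; i < len; i++)
                buf[i] = (uint8_t)(rand() & 0xFF);
            /* Run under a memory checker; a crash here is a finding. */
            (void)parse_message(buf, len);
        }
        puts("fuzzing run complete");
        return 0;
    }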
6 Release Managers And Application Owners

If software is properly sponsored, designed, developed, and well tested, it is less likely to contain serious flaws. Once compiled, however, the application still faces multiple threats and must be guarded by both the developers and those who purchase the software or software license. The majority of this burden falls upon the release managers and the application owners--the clients or people who are actually deploying the software.

Once the source code is compiled and distributed or located such that the binaries may be retrieved by others (e.g. on a webserver), the binary files and all other files (e.g. configuration files) are also vulnerable to replacement. For this reason, the files must be subjected to change management and protected, just as the source code was. Modifying binary files to include malcode can be accomplished, but it likely requires the attacker to decompile the binaries or use a debugger if the attacker does not want the application to crash following the execution of the malcode. In most instances, the attacker would need to understand assembly to insert the malcode, but this isn't a requirement, particularly if the attacker can gain access to the source code--thus, source code and the binary files should never be colocated--or if the binary file is simply byte-code (e.g. a Java .class file or a Flash SWF file).
In the week prior to August 22, 2008, the Fedora Project and Red Hat detected a large compromise of multiple systems on their networks. The impact of the compromise on the two networks was different, as the systems compromised were different. One of the more important systems compromised on Fedora's network was a system used to sign packages (the signature verifying the integrity of the project's source and binary files). Fedora had to verify the integrity of the binary and source files under its control, as well as change the key used to sign packages. Red Hat was not as lucky, because checks on file integrity showed that the attacker had managed to modify and then sign a series of packages relating to OpenSSH. In both cases, the integrity of the binaries distributed to the user base was in question. The organization (Fedora being owned by Red Hat) was already using signing to verify package integrity, which helped to detect the modifications. Red Hat users who downloaded the malicious updates could not, however, have known that the repository was cracked, because the attacker used the Red Hat key to sign the malicious packages (at the time of this writing, Fedora maintains that the attacker did not modify any Fedora update packages). Red Hat has since had to release a patch to replace the malicious binaries with the proper ones on any system that automatically updated to the malicious versions. (Frields 2008)(Red Hat Network 2008)
The above case studies from Red Hat and Fedora illustrate why it is important to safeguard the binary files, especially at their distribution point. The aforementioned Apache case study (Section 4.2.2) also illustrates this point. In all of the cases, the files had to be painstakingly gone through to ensure that they had not been tampered with. In the case of Red Hat and Fedora, the Byzantine Generals Problem held true because the faulty device was detectable by comparing the hashes and validating the signatures--assuming that one did both using multiple, distinct sources. While doing this made it easier to detect that integrity was lost in the binary distribution files, it still did not replace the need for tedious, manual review of all of the files, because a simple requirement in the Byzantine Generals Problem cannot be met by current (or practical) technology: a non-faulty device's signature cannot be forged by a faulty device. As such, a manual review following a breach is required, but automated means can certainly help to detect the initial breach.
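The hash-comparison half of that check can be sketched in C using OpenSSL (this assumes OpenSSL 1.1 or later and linking with -lcrypto; the package file name is hypothetical). The computed digest would be compared against a hash published through a separate, trusted channel, with signature validation as the second half of the check:

    #include <stdio.h>
    #include <openssl/evp.h>   /* link with -lcrypto */

    /* Compute a file's SHA-256 digest for comparison against a
     * published hash obtained from a distinct, trusted source. */
    static int sha256_file(const char *path,
                           unsigned char out[EVP_MAX_MD_SIZE],
                           unsigned int *out_len)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        if (ctx == NULL) {
            fclose(f);
            return -1;
        }
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);

        EVP_DigestFinal_ex(ctx, out, out_len);
        EVP_MD_CTX_free(ctx);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len;
        if (sha256_file("openssh-update.rpm", md, &md_len) == 0) {
            for (unsigned int i = 0; i < md_len; i++)
                printf("%02x", md[i]);
            putchar('\n');   /* compare against the published hash */
        }
        return 0;
    }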
To best secure the application's files, the release manager should ensure that secure installation and operation documentation is created and made available to the application owner. The documentation should list any ports and protocols used by the application, to aid firewall administrators. The documentation should also provide instructions to secure the application, such as setting proper access controls in the application, installing only what is required by the application owner, setting access controls on the application's files, and anything else that will provide a more secure operating environment. In short, the documentation should provide the application owner the ability to reduce the application's attack surface and mitigate many unknown vulnerabilities. The application owner should then be certain to follow the documentation.
7 Conclusion

This paper helps to define a secure development model that can integrate into most existing software development lifecycles. The paper also explains the duties that each person--from management down--should perform to help facilitate the model and create a secure application. Management must take an active role in the development process and ensure that security is given proper funding and attention. Designers must be certain to completely understand the architecture upon which the application is to operate, design coding standards and guidelines that ensure the software is written securely, and create a secure design. The programmers must adhere to the standards and guidelines set forth by the designers and be certain to code in a secure manner. The test team must perform both positive and negative testing, follow a test plan while remaining flexible enough to discover issues that detailed plans could not find, and test all input. The release manager and application owner must make certain that the binaries are protected such that they cannot be replaced or modified by unauthorized persons and that the application is installed and operated in the most secure manner possible. As this paper explains, if any of the above personnel fail in their duties, the software may be operated with egregious vulnerabilities that could be exploited by malicious attackers or accidentally triggered by normal operation.
Appendix A - BibliographyApache Software Foundation. "Apache.Org compromise report, May 30th, 2001: Unauthorized Access to
Apache Software Foundation Server."Apache. May 30, 2001. http://www.apache.org/info/20010519-hack.html
(accessed September 1, 2008).
Bacon, Jean, and Tim Harris. Operating Systems: Concurrent and Distributed Software Design. London:
Addison-Wesley, 2003.
Defense Information Systems Agency.Application Security and Development Checklist. Version 2, Release 1.
2008.
.Application Security And Development Security Technical Implementation Guide. Version 2, Release 1.
2008.
Dowd, Mark, John McDonald, and Justin Schuh. The Art of Software Security Assessment: Identifying and
Preventing Software Vulnerabilities. New York: Addison-Wesley Professional, 2006.
Espiner, Tom.Lawyers: Vista branding confused even Microsoft. November 28, 2007.
http://news.zdnet.com/2100-9595_22-177815.html?tag=nl.e550 (accessed November 2, 2008).
Frields, Paul W. "Infrastructure report, 2008-08-22 UTC 1200."Fedora Project. August 22, 2008.
https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html (accessed September 1, 2008).
Hansell, Saul. "Microsoft Tries to Polish Vista."New York Times, July 22, 2008: Bits.
Howard, Michael. "How Do They Do It?: A Look Inside the Security Development Lifecycle At Microsoft."
MSDN Magazine, November 2005.
Howard, Michael, and David LeBlanc. Writing Secure Code. 2nd Edition. Redmond, Washington: Microsoft
Press, 2003.
Lamport, Leslie, Robert Shostak, and Marshall Pease. "The Byzantine Generals Problem." ACM Transactions
on Programming Languages and Systems, July 1982: 382-401.
Lethbridge, Timothy C., and Robert Laganiere. Object-Oriented Software Engineering: Practical software
development using UML and Java. 2nd Edition. New York: McGraw Hill, 2005.
Lohr, Steve. "Et Tu, Intel? Chip Giant Won't Embrace Microsoft's Windows Vista." New York Times, June 25,
2008: Bits.
Mao, Wenbo. Modern Cryptography: Theory and Practice. Boston: Prentice Hall, 2003.
Mockapetris, Paul. "Domain Names - Implementation and Specification (RFC 1035)." The Internet Engineering
Task Force - Request for Comments. November 1987. http://www.ietf.org/rfc/rfc1035.txt (accessed September 4,
2008).
Owens, Daniel. Certification Test & Evaluation Report for the Common Link Integration Processing (CLIP)
Release 1.2.1.26. Certification Test and Evaluation Report, San Diego: Booz Allen Hamilton, 2008.
Owens, Daniel. Common Link Integration Processing (CLIP) Change Management. Whitepaper, San Diego:
Booz Allen Hamilton, 2008.
Owens, Daniel. Common Link Integration Processing (CLIP) Integrity. Whitepaper, San Diego: Booz Allen
Hamilton, 2008.
Owens, Daniel, Lawrence Lauderdale, Karen Scarfone, and Murugiah Souppaya. NIST 800-118: Guide to
Enterprise Password Management (Draft). National Institute of Standards and Technology, 2008.
Pfleeger, Charles P., and Shari L. Pfleeger. Security in Computing. 4th Edition. Westford, Massachusetts:
Prentice Hall, 2007.
Poeter, Damon. Microsoft, Not Intel, Scrapped Vista Capable Hardware Requirements. November 17, 2008.
http://www.crn.com/software/212100378?cid=microsoftFeed (accessed November 18, 2008).
Red Hat Network. "Critical: openssh security update - RHSA-2008:0855-6." Red Hat Support. August 22,
2008. http://rhn.redhat.com/errata/RHSA-2008-0855.html (accessed September 1, 2008).
Seacord, Robert C. Secure Coding in C and C++. Boston: Addison Wesley, 2005.
Security Across the Software Development Lifecycle Task Force. Improving Security Across The Software
Development Lifecycle. National Cyber Security Summit, 2004.
Software Process Subgroup of the Task Force on Security across the Software Development Lifecycle.
Processes to Produce Secure Software: Towards more Secure Software. Vol. I. National Cyber Security Summit,
2004.
Stallings, William, and Lawrie Brown. Computer Security: Principles and Practice. Upper Saddle River, New
Jersey: Prentice Hall, 2008.
The Amazon S3 Team. Amazon S3 Availability Event: July 20, 2008. July 20, 2008.
http://status.aws.amazon.com/s3-20080720.html (accessed August 19, 2008).
Appendix B - Secure Programming Guideline Repositories
Government Security Guideline Repositories
Draft Security Technical Implementation Guides and Security Checklists, Defense Information Systems Agency
(DISA), http://iase.disa.mil/stigs/draft-stigs/index.html.
National Institute of Standards and Technology Draft Publications, National Institute of Standards and
Technology (NIST), http://csrc.nist.gov/publications/PubsDrafts.html.
National Institute of Standards and Technology Special Publications (800 Series), National Institute of
Standards and Technology (NIST), http://csrc.nist.gov/publications/PubsSPs.html.
National Security Agency Security Configuration Guides, National Security Agency (NSA),
http://www.nsa.gov/snac/downloads_all.cfm?MenuID=scg10.3.1.
Security Checklists, Defense Information Systems Agency (DISA), http://iase.disa.mil/stigs/checklist/index.html.
Security Technical Implementation Guides, Defense Information Systems Agency (DISA),
http://iase.disa.mil/stigs/stig/index.html.
Vendor Security Guideline Repositories
ColdFusion Developer Center, Adobe, http://www.adobe.com/devnet/coldfusion/security.html.
Java SE Security, Sun Microsystems, http://java.sun.com/javase/technologies/security/.
Microsoft Developer Network .NET Security, Microsoft, http://msdn.microsoft.com/en-us/library/aa286519.aspx.
Microsoft Developer Network Security Developer Center, Microsoft, http://msdn.microsoft.com/en-us/security/default.aspx.
PHP, The PHP Group, http://www.php.net/manual/en/.
Visual Studio Development Tools and Languages, Microsoft, http://msdn.microsoft.com/en-us/library/aa187916.aspx.
Appendix C - Sample Unsafe Function List
C/C++
Unsafe Function Alternative
memcpy No safer alternative; check for overflows and type safety
memcmp No safer alternative; check for overflows and type safety
memcat No safer alternative; check for overflows and type safety
memmove No safer alternative; check for overflows and type safety
memset No safer alternative; check for overflows and type safety
memchr No safer alternative; check for overflows and type safety
printf vprintf
sprintf snprintf
putc putc_unlocked
putchar putchar_unlocked
getc No safer alternative; check for overflows
read No safer alternative; check for overflows
strlen sizeof/strnlen_s
strcmp strncmp
strcpy strncpy/strlcpy
strcat strncat/strlcat
scanf No safer alternative; check for format string vulnerabilities and overflows
sscanf No safer alternative; check for format string vulnerabilities and overflows
fscanf No safer alternative; check for format string vulnerabilities and overflows
strtok strtok_r
atoi/atol strtol
atof strtod
atoll strtoll
fprintf vfprintf
gets fgets
fgets No safer alternative; check for overflows
crypt crypt_r
system No safer alternative; parameters must be carefully checked
exec* (e.g. execlp) No safer alternative; parameters must be carefully checked
strerror strerror_r
chmod fchmod
usleep nanosleep/setitimer
ttyname ttyname_r
readdir readdir_r
ctime ctime_r
gmtime gmtime_r
getlogin getlogin_r
snprintf vsnprintf
popen No safer alternative; parameters must be carefully checked
ShellExecute No safer alternative; parameters must be carefully checked
ShellExecuteEx No safer alternative; parameters must be carefully checked
setuid No safer alternative
setgid No safer alternative
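To make the C/C++ table concrete, the fragment below contrasts several of the unsafe calls with the bounded alternatives listed above. It is a minimal sketch assuming C99 on a POSIX system; the buffers and input strings are illustrative only.

    #include <stdio.h>   /* snprintf, printf */
    #include <stdlib.h>  /* strtol */
    #include <string.h>  /* strncpy, strncat, strtok_r, strlen */

    int main(void)
    {
        char dst[32];

        /* strcpy -> strncpy with an explicit bound; strncpy does not
           guarantee termination, so add the NUL ourselves. */
        strncpy(dst, "user-supplied input", sizeof(dst) - 1);
        dst[sizeof(dst) - 1] = '\0';

        /* strcat -> strncat, bounded by the space that remains. */
        strncat(dst, "!", sizeof(dst) - strlen(dst) - 1);

        /* sprintf -> snprintf, which cannot write past the buffer. */
        char msg[64];
        snprintf(msg, sizeof(msg), "value: %s", dst);

        /* strtok -> strtok_r, the reentrant (thread-safe) variant. */
        char csv[] = "a,b,c";
        char *save = NULL;
        for (char *tok = strtok_r(csv, ",", &save); tok != NULL;
             tok = strtok_r(NULL, ",", &save)) {
            printf("token: %s\n", tok);
        }

        /* atoi -> strtol, which makes conversion errors detectable. */
        char *end = NULL;
        long n = strtol("42", &end, 10);
        if (end != NULL && *end == '\0') {
            printf("parsed %ld\n", n);
        }
        return 0;
    }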
ASP/.NET
Unsafe Function Alternative
stackalloc No safer alternative; ensure that the functions using this memory do not overflow
strcpy/wcscpy/lstrcpy/_tcscpy/_mbscpy strlcpy
strcat/wcscat/lstrcat/_tcscat/_mbscat strlcat
strncpy/wcsncpy/lstrcpyn/_tcsncpy/_mbsnbcpy strlcpy
strncat/wcsncat/_tcsncat/_mbsnbcat strlcat
mem*/CopyMemory _memc*/_memccpy
sprintf/swprintf StringCchPrintf
_snprintf/_snwprintf StringCchPrintf
printf/_sprintf/_snprintf/vprintf/vsprintf and wide-character variants StringCchPrintf
strlen/_tcslen/_mbslen/wcslen StringCchLength
gets fgets
scanf/_tscanf/wscanf fgets
>> istream::width (bound the extraction first)
MultiByteToWideChar No safer alternative; check for overflows
_mbsinc/_mbsdec/_mbsncat/_mbsncpy/_mbsnextc/_mbsnset/_mbsrev/_mbsset/_mbsstr/_mbstok/_mbccpy/_mbslen No safer alternative; check for overflows and other vulnerabilities
CreateProcess/CreateProcessAsUser/CreateProcessWithLogon No safer alternative; parameters must be carefully checked
WinExec/ShellExecute No safer alternative; parameters must be carefully checked
LoadLibrary/LoadLibraryEx/SearchPath No safer alternative; parameters must be carefully checked
TTM_GETTEXT No safer alternative; check for overflows
_alloca/malloc/calloc/realloc Use new
recv WSAEventSelect
IsBadXXXPtr No safer alternative; check for overflows, race conditions, error handling vulnerabilities, and denial of service vulnerabilities
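On Windows, many of the replacements in the table above come from the StringCch* family in strsafe.h, which takes the destination size explicitly and fails instead of overflowing. The sketch below is a minimal illustration assuming the Windows SDK; the buffer contents are hypothetical.

    #include <windows.h>
    #include <strsafe.h>  /* StringCchPrintf, StringCchCat, StringCchLength */
    #include <stdio.h>

    int main(void)
    {
        char buf[64];

        /* sprintf -> StringCchPrintf: the size (in characters) is passed
           explicitly, so the call fails rather than overflows. */
        if (FAILED(StringCchPrintfA(buf, ARRAYSIZE(buf), "user=%s", "alice"))) {
            fprintf(stderr, "formatting failed or was truncated\n");
            return 1;
        }

        /* strcat -> StringCchCat, bounded by the same total size. */
        if (FAILED(StringCchCatA(buf, ARRAYSIZE(buf), ";role=operator"))) {
            return 1;
        }

        /* strlen -> StringCchLength, which rejects unterminated buffers. */
        size_t len = 0;
        if (SUCCEEDED(StringCchLengthA(buf, ARRAYSIZE(buf), &len))) {
            printf("%s (%lu characters)\n", buf, (unsigned long)len);
        }
        return 0;
    }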
Perl
Unsafe Function Alternative
system No safer alternative; parameters must be carefully checked
exec No safer alternative; parameters must be carefully checked
open No safer alternative; parameters must be carefully checked
glob No safer alternative; perform the listing and then sort
eval No safer alternative
goto No safer alternative
dbmclose untie
dbmopen tie
gethostbyaddr No safer alternative; do not trust the result
do &SUBROUTINE
/e No safer alternative
| (OR) No safer alternative
$# Deprecated
$* /m or /s
$[ 0
` (backtick) No safer alternative; backticks, including qx{}, should be avoided
Any functions, methods, or classes marked deprecated If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
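The system/exec caution recurs in every table in this appendix: any string handed to a command interpreter must be treated as hostile. The usual mitigation, sketched below in C for a POSIX system (the command and argument are hypothetical), is to bypass the shell entirely and pass arguments as a fixed vector so that metacharacters are never re-parsed.

    #include <sys/types.h>  /* pid_t */
    #include <sys/wait.h>   /* waitpid, WIFEXITED, WEXITSTATUS */
    #include <unistd.h>     /* fork, execl, _exit */

    /* Instead of system("ls <user input>"), which hands attacker-controlled
       text to a shell, fork and exec with a fixed argument vector. */
    static int list_directory(const char *dir)
    {
        pid_t pid = fork();
        if (pid == -1) {
            return -1;
        }
        if (pid == 0) {
            /* Child: dir travels as a single argument, never re-parsed by
               a shell; "--" stops ls from treating it as an option. */
            execl("/bin/ls", "ls", "--", dir, (char *)NULL);
            _exit(127);  /* reached only if exec failed */
        }
        int status = 0;
        if (waitpid(pid, &status, 0) == -1) {
            return -1;
        }
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        return list_directory("/tmp") == 0 ? 0 : 1;
    }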
PHP
Unsafe Function Alternative
system No safer alternative; parameters must be carefully checked
exec No safer alternative; parameters must be carefully checked
shell_exec No safer alternative; parameters must be carefully checked
popen No safer alternative; parameters must be carefully checked
passthru No safer alternative; parameters must be carefully checked
eval No safer alternative
mysql_* Use mysqli_* with prepared statements
` (backtick) No safer alternative; backticks should be avoided
Any functions, methods, or classes marked deprecated If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
Python
Unsafe Function Alternative
os.system No safer alternative; parameters must be carefully checked
exec No safer alternative; parameters must be carefully checked
os.popen No safer alternative; parameters must be carefully checked
execfile No safer alternative
eval No safer alternative
input raw_input
compile No safer alternative
marshal module No safer alternative
pickle module No safer alternative
rexec Deprecated
Any functions, methods, or classes marked deprecated If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against
ColdFusion
Unsafe Function Alternative
No safer alternative
Raw_input
IsAuthorized Deprecated
Ensure proper server configuration and sandbox
Ensure proper server configuration and sandbox
Any functions, methods, or classes marked deprecated If possible, do not use the unsafe code; if required, ensure that memory problems, overflows, and other security issues are protected against