Secure Software Development


Today's apps and software are lagging in security. As technology advances, many businesses are consumed by the pressure to stay competitive and push for new features without considering the security implications for the public.

Sample Documents

Secure Software Design

A review of past vulnerabilities and secure coding methods

Securing Office 365

Risks and solutions with O365 deployment

Topic Examples

Why secure software design is important

Risks and solutions with O365 deployment

Defining and applying security concepts from the past

Dangers in the open source libraries used in most applications

Lessons learned from the OpenSSL vulnerability

The document reviews OpenSSL and its accompanying software vulnerabilities, the types of risks involved in similar software, and how to mitigate future bugs.

It's no surprise that today's apps and software are lagging in security; we see it in the news almost every week. As technology advances, many businesses are consumed by the pressure to stay competitive and push for performance without considering the security implications for the public. This new security culture places an ethical responsibility on the owners and managers of new products to design secure applications and avoid the fallout.
 

The Forbes article "Fostering a Culture of Cyber Security" highlights this push and the balance needed to make it work: "While cybersecurity may be every executive’s job, there is frustration at every level of the corporation while trying to make progress. Ask any gathering of executives on the topic of security and there will be plenty of stories shared about having too much security or not enough" (Chen, 2018).
 

Secure software design should be at the top of the priority list for software companies. When we look at how external attacks are carried out, almost 80% involve a software vulnerability or a web application attack. The Forrester report "The State of Application Security, 2018" summarizes how external attacks were carried out:

[Figure: Forrester data on how external attacks were carried out]

Many cybersecurity experts continue to warn that security challenges will grow as we approach the next decade. The next chart from the Forrester report shows one reason why: the number of vulnerabilities discovered every year in very popular open source software packages is growing exponentially.

[Figure: Forrester data on open source vulnerabilities discovered per year]

The Forrester study cites the following reason for the increase: "...open source developers are not executing prerelease vulnerability scans, so software composition analysis (SCA) and frequent updates to open source components will be the responsibility of the enterprise" (DeMartine, 2018).
 

The question becomes: where should security resources be allocated without bloating the budget? The Forrester report "The State of Application Security, 2018" may provide some hints. The matrix shown below presents where industries have implemented cyber security technologies:

[Figure: Forrester matrix of application security technology adoption by industry]

Among the respondents to the 2017 security survey, the retail industry plans to spend heavily on anti-bot technology, telecommunications on interactive application security testing and runtime application self-protection, and financial services on penetration testing and software composition analysis.
 

Recommendations from the Study
The data places great responsibility on owners, coders, and enterprises to professionally review application code. Those responsible can start with the following recommended processes from the study:
 

1. Interactive developer tools will have to spawn integrations themselves.
In Forrester’s recent evaluation of CI tools, the products demonstrating the best integration capabilities use webhooks. Webhooks are callback methods in an application that eliminate the need for plug-ins or do-it-yourself integrations. For example, when prerelease security tools are hooked to CI or other automation tools, they can automatically receive calls that trigger a scan at check-in, before a build. This kind of integration saves security and application pros from having to build complex, nonstandard integrations for each application and each scanning tool based on project specifications. A minimal sketch of the pattern follows.
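To make the webhook pattern concrete, here is a hedged Python sketch (not from the Forrester report) of a receiver that accepts a repository push event and kicks off a prerelease scan. The /webhook semantics, the port, and the run_scan command are illustrative assumptions.

```python
# Minimal webhook receiver: a security tool exposes an HTTP endpoint, and
# the repository calls it on every push, triggering a scan before any build.
# The port and the "run_scan" scanner command are hypothetical.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # Kick off a vulnerability scan of the pushed commit asynchronously.
        subprocess.Popen(["run_scan", "--commit", event.get("commit", "HEAD")])
        self.send_response(202)  # accepted; the scan runs in the background
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```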
 

2. Application tools must support inherent policy setting for consistency.
Today, some integrations between security scanners and developer tools rely on configuration and coding to initiate scans, process results, and take actions such as failing a build. What’s worse, requirements might differ depending on whether the application is developed in-house or by a third party, or on the point in the SDLC at which the scan takes place. To make the process scalable, security pros need to be able to set policy within the prerelease security tools, so that the timing and type of scans are based on a consistent consideration of application types, business units, or other company needs.
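As a hedged illustration of policy-driven scanning (not taken from the report), a prerelease tool could resolve which scans to run, and whether to fail the build, from a declarative policy table rather than per-pipeline code. The policy keys and scan names below are invented for the example.

```python
# Hypothetical policy table: scan type and build behavior resolved from the
# application's profile instead of being hard-coded into each CI pipeline.
POLICY = {
    ("in-house", "payments"):    {"scans": ["sast", "sca"], "fail_build": True},
    ("third-party", "payments"): {"scans": ["sca"],         "fail_build": True},
    ("in-house", "marketing"):   {"scans": ["sast"],        "fail_build": False},
}

def resolve_policy(origin: str, business_unit: str) -> dict:
    """Return the scan policy for an application, with a strict default."""
    return POLICY.get((origin, business_unit),
                      {"scans": ["sast", "sca"], "fail_build": True})

print(resolve_policy("third-party", "payments"))
```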


For information about application hardening, the Gartner report "Market Guide for Application Shielding" provides a review of vendors in the market and findings developers should be aware of. Here is an excerpt of the recommendations:

Security and risk management leaders responsible for application security:
 

  • Choose application shielding to protect high-value applications that run within untrusted environments and that move software logic to the front end. Candidates will be mobile apps, progressive web apps, serverless and single-page web apps and consumer-facing apps, as well as software on connected devices.
     

  • Assess your current application security and management solutions to determine whether a stand-alone application shielding solution is needed or whether you can leverage functionality from your incumbent web application firewall (WAF), runtime application self-protection (RASP) and other solutions.
     

  • Avoid using application shielding as a replacement for application security testing and vulnerability patching.


 

References
Chen, B. (2018, August 28). Fostering a Culture of Cyber Security. Retrieved from https://www.forbes.com/sites/forbestechcouncil/2018/08/28/fostering-a-culture-of-cybersecurity/#264e752183ab
 

DeMartine, A. (2018, January 23). The State of Application Security, 2018. Retrieved from https://cdn2.hubspot.net/hubfs/3887453/Resources/whitepapers/The%20State%20Of%20Application%20Security,%202018.pdf

Zumerle, D., Bhat, M. (2018, June 27). Market Guide for Application Shielding. Retrieved from https://www.gartner.com/doc/3747622/market-guide-application-shielding

 

Securing Office 365

A review of the risks and solutions with O365 deployment

 

Breaking Down Past Technology Concepts That Help Us Understand Today's Security

The following is a brief discussion of four technical papers that explore ideas utilizing the Trusted Platform Module (TPM) for secure boot and secure application execution. The TPM is a hardware chip in a device that safeguards the encryption keys used by the device's encryption mechanisms. Many of today's Windows devices use a TPM as vendors embrace the technology. For example, to utilize BitLocker for encrypting a mobile device's hard drive, the TPM chip must be enabled in the BIOS.
 

If you cannot follow the concepts, you can also use this write-up to help you fall asleep.
 

Bootstrapping Trust in Commodity Computers by B. Parno, J. McCune & A. Perrig from 2010
 
The authors revisit the concept of “bootstrapping trust,” which predates the current “trusted computing” trend in computer security. The research focuses on mechanisms to securely collect and store information about a commodity computer’s execution environment. The bootstrapping mechanism relies on a foundational concept called the “root of trust,” which involves a hardware component: the root of trust rests on the secrecy of a private key (inside the processor) that is certified by the hardware manufacturer.
 
The paper additionally covers the concept of “code identity,” which is measured by the root-of-trust hardware. The research then provides an overview of TPM functions and commercial applications, and uses real-world examples of bootstrapping in Code Access Security in Microsoft .NET, BitLocker, Trusted Network Connect (TNC), and secure boot in mobile phones.
 

The paper concludes by laying a foundation for linking an efficient, cost-effective TPM, via bootstrapping, to trusted applications designed to protect the owner's device. From the paper, we see the beginning stages of efficient uses of the TPM.
 
 
TrustVisor: Efficient TCB Reduction and Attestation by J.M. McCune, N. Qu, Y. Li, A. Datta, V.D. Gligor, & A. Perrig from 2010
 
This publication presents a special-purpose hypervisor called “TrustVisor,” described as “a hypervisor that provides code integrity as well as data integrity and secrecy for selected portions of an application.” The authors claim TrustVisor provides a high level of security by both protecting sensitive code and keeping a code base small enough to verify. The research focuses on data secrecy and execution integrity without degrading the performance of the legacy system. TrustVisor is designed to provide a measured, isolated execution environment for security-sensitive code modules without trusting the OS or the application that invokes the code module. The system is initialized through a process similar to the dynamic root of trust for measurement (DRTM), called the TrustVisor Root of Trust for Measurement (TRTM). The TRTM interacts with TrustVisor's micro-TPM (μTPM), which executes on the platform's primary CPU.

The example adversary presented in the paper can execute arbitrary code as part of the legacy OS and has access to DMA-capable devices, enabling rootkits and Trojans. Additionally, the adversary can block, inject, or modify network traffic between entities in the system, though without breaking cryptographic functions. TrustVisor aims to thwart such an adversary's advances by vigilantly interrogating all executable code.
 
The goal of TrustVisor is to run pieces of application logic (PALs) in total isolation from the legacy OS and DMA-capable devices, allowing secure execution of code blocks. The system provides a memory protection mechanism, hardware memory protections, and life-cycle protection. The memory protection mechanism restricts memory regions using the IOMMU (input/output memory management unit) and configures its page tables to exclude machine pages that must remain inaccessible. Notable statistics: the system comprises only 6,351 lines of code and imposes roughly 7% overhead on the legacy system.
 
Proceedings of the 13th USENIX Security Symposium: Design and Implementation of a TCG-based Integrity Measurement Architecture by R. Sailer, X. Zhang, T. Jaeger, & L. van Doorn from 2004
 
This paper discusses the “design and implementation of a secure integrity measurement system for Linux.” The authors claim their particular design extends the Trusted Computing Group (TCG) standards down to the BIOS layer. The argument for trusted computing at the BIOS layer is strong, since rootkits are not detectable under the current standards. This concept is called “trusted boot” and uses a separate processor/chip, the trusted platform module (TPM). The example used in the paper is a server running the Apache web server and Tomcat web containers in a Red Hat 9.0 Linux environment. To establish the integrity of the boot, the authors build a measurement system into the Linux kernel that covers the BIOS and the GRUB bootloader.

The three main components of the architecture are as follows. First, the measurement mechanism on the system determines which parts of the run-time environment to measure, when to measure them, and how to securely maintain the measurements. Second, the integrity challenge mechanism authorizes challengers to retrieve the measurement lists of a computing platform and verify their freshness and completeness. Third, the integrity validation mechanism validates that the measurement list is complete and untampered, and that all individual measurement entries describe trustworthy runtime code and configuration files. The interactions are carried out by measurement agents that extend the measurement list into the TPM and report that extension from the TPM. Notable statistics: the project took only about 4,000 lines of code, and only 250-500 measurements are needed before a system is fingerprinted.

Below is more detail on the three major components:

The measurement mechanism enables the Linux kernel to measure changes to itself and to the user-level processes it creates. The mechanism computes a SHA-1 hash that uniquely identifies file contents and stores the individual hashes in a measurement list. The TPM is used to protect the measurement list: the platform configuration registers (PCRs) located in the TPM provide protected data registers updated via TPM_extend. The measurement mechanism protects against corrupted systems because it detects tampering by recomputing the aggregate of the list secured in the TPM.
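To make the TPM_extend operation concrete, here is a minimal Python sketch (an illustration, not the kernel implementation) of how a PCR aggregates a measurement list. Each new measurement is folded into the register as PCR_new = SHA-1(PCR_old || measurement), so any tampering with the list changes the recomputed aggregate.

```python
# Simulated PCR extend: the register accumulates SHA-1 measurements so the
# whole list can be verified against one protected value. Illustrative only.
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM_extend semantics: PCR_new = SHA1(PCR_old || measurement)."""
    return sha1(pcr + measurement)

# Hashes of the boot chain components, in measurement order.
measurement_list = [sha1(b"bios"), sha1(b"grub"), sha1(b"vmlinuz")]

pcr = bytes(20)  # PCRs start zeroed at boot
for m in measurement_list:
    pcr = extend(pcr, m)

# A verifier recomputes the aggregate from the reported list; editing any
# entry yields a value that no longer matches the TPM-protected PCR.
check = bytes(20)
for m in measurement_list:
    check = extend(check, m)
assert check == pcr
```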
 

The integrity challenge mechanism is a protocol to securely retrieve measurements and validation information from the attesting system. The mechanism protects against the following threats: first, replay attacks that present aggregate information captured before the system was corrupted; second, tampering with the measurement list or TPM values before or during transmission; third, masquerading attacks that substitute the measurement list and TPM values of another, non-compromised system. The protection against these threats is a secure SSL connection combined with a 2048-bit attestation identity key (AIK) created securely inside the TPM.
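A hedged sketch of the freshness portion of such a challenge: the verifier sends a random nonce, and the attester binds the nonce and the current PCR value together in a signed quote, so a replayed old quote fails verification. An HMAC with a shared key stands in for the TPM's AIK signature purely for illustration.

```python
# Simplified challenge-response for attestation freshness. The HMAC is a
# stand-in for the TPM's AIK signature; illustrative only.
import hashlib
import hmac
import os

AIK_STANDIN_KEY = os.urandom(32)  # stand-in for the TPM-resident AIK

def quote(pcr: bytes, nonce: bytes) -> bytes:
    """Attester: bind the current PCR state to the verifier's nonce."""
    return hmac.new(AIK_STANDIN_KEY, pcr + nonce, hashlib.sha256).digest()

def verify(pcr: bytes, nonce: bytes, q: bytes) -> bool:
    """Verifier: recompute and compare; a replayed quote has a stale nonce."""
    return hmac.compare_digest(q, quote(pcr, nonce))

nonce = os.urandom(20)  # fresh per challenge
pcr_now = hashlib.sha1(b"boot-state").digest()
assert verify(pcr_now, nonce, quote(pcr_now, nonce))          # fresh quote passes
assert not verify(pcr_now, os.urandom(20), quote(pcr_now, nonce))  # replay fails
```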
 

The integrity validation mechanism validates the individual measurements of the attesting party's platform configuration, as well as the dynamic measurements taken on the system after it is rebooted. The mechanism uses TPM counters that persist throughout the lifetime of the TPM.

The design and execution of this integrity-checking mechanism provide protection from the BIOS up to the application level for Linux systems. The same or similar technology has since been introduced in certain Red Hat Enterprise Linux (RHEL) operating systems.
 
Flicker: An Execution Infrastructure for TCB Minimization by J.M. McCune, B. Parno, A. Perrig, M.K. Reiter, & H. Isozaki from 2008
 
Flicker is presented as “an infrastructure for executing security-sensitive code in complete isolation while trusting as few as 250 lines of code.” The research claims that Flicker provides protection even when the BIOS, OS, and DMA-enabled devices are malicious. The research uses an AMD platform to demonstrate how Flicker leverages commodity processors without requiring a new OS or VMM.
 
Flicker utilizes a capability called late launch, found in AMD’s secure virtual machine (SVM) technology. SVM allows a late launch of a VMM or security kernel at an arbitrary time, with built-in protection against software-based attacks. Flicker uses this function to pause the OS, execute the SKINIT instruction to enter an isolated execution environment, and then resume the OS afterward. This isolated execution allows for incremental security by running small pieces of application logic (PALs).
 


SUMMARY: Why is all this good to know?
 

The conclusions of all four papers lay great responsibility on the TPM hardware and on secure transmission and reporting policies. The TPM remains one of the core technologies used today to help protect devices. All four research papers provide background and research history on TPM utilization, TCG standards implementation, bootstrapping, and the technologies in which these ideas are now embedded. We can see the incremental security advancements and the alternatives they provide against attacks from outside the system as well as those persistent within it.
 
 
References
 
McCune, J.M., Qu, N., Li, Y., Datta, A., Gligor, V.D., & Perrig, A. (2010). TrustVisor: Efficient TCB Reduction and Attestation. Retrieved from https://ole.sandiego.edu/bbcswebdav/pid-947811-dt-content-rid-3786758_1/courses/CSOL-560-MASTER/McCun_10_Trustvisor_efficient_tcb_reduction_and_attestation.pdf
 
McCune, J.M., Parno, B., Perrig, A., Reiter, M.K., & Isozaki, H. (2008). Flicker: An Execution Infrastructure for TCB Minimization. Retrieved from https://ole.sandiego.edu/bbcswebdav/pid-947811-dt-content-rid-3786760_1/courses/CSOL-560-MASTER/McCun_08_Flicker.pdf
 
Parno, B., McCune, J., & Perrig, A. (2010). Bootstrapping Trust in Commodity Computers. Retrieved from https://ole.sandiego.edu/bbcswebdav/pid-947811-dt-content-rid-3786757_1/courses/CSOL-560-MASTER/PaMcPe2010.pdf
 
Sailer, R., Zhang, X., Jaeger, T., & van Doorn, L. (2004). Design and Implementation of a TCG-based Integrity Measurement Architecture. Proceedings of the 13th USENIX Security Symposium. Retrieved from https://ole.sandiego.edu/bbcswebdav/pid-947811-dt-content-rid-3786759_1/courses/CSOL-560-MASTER/Sailer_04_Design_and_Implementation.pdf
 

Vulnerabilities in OpenSSL & Crypto Libraries

What is OpenSSL and why is it important for secure network operations?

OpenSSL is described on its website as “a robust, commercial-grade, and full-featured toolkit for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. It is also a general-purpose cryptography library” (OpenSSL.org). Essentially, it is how many HTTPS connections on the web are encrypted and displayed with the green lock symbol in the address bar.
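For a sense of where such a library sits in practice, here is a minimal Python sketch that opens a TLS connection; Python's ssl module is typically backed by OpenSSL, and example.com stands in for any HTTPS host.

```python
# Minimal TLS client: the ssl module (commonly backed by OpenSSL) performs
# the handshake, certificate validation, and encryption for the socket.
import socket
import ssl

context = ssl.create_default_context()  # sane defaults: cert verification on
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # the validated server certificate
```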

How was the Heartbleed bug introduced? What was the bug?

Synopsys explains: 
"The bug got its start from improper input validation in the OpenSSL implementation of the TLS Heartbeat extension. Due to the missing bounds check on the length and payload fields in Heartbeat requests, coupled with trusting the data received from other machines, the responding machine mistakenly sends back its own memory data.

During a TLS encrypted handshake, two machines send each other Heartbeat messages. According to RFC 6520, a Heartbeat response needs to contain the exact copy of the payload from the Heartbeat request. When a Heartbeat request message is received, the machine writes the payload contents to its memory and copies the contents back in response. The length field is meant to be the length of the payload. OpenSSL allocates memory for the response based on length and then copies the payload over into the response using memcpy().

Attackers can send Heartbeat requests with the value of the length field greater than the actual length of the payload. OpenSSL processes in the machine that are responding to Heartbeat requests don’t verify if the payload size is same as what is specified in length field. Thus, the machine copies extra data residing in memory after the payload into the response. This is how the Heartbleed vulnerability works. Therefore, the extra bytes are additional data in the remote process’s memory" (Gajawada, 2016).
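The flaw is easiest to see in code. Below is a simplified Python simulation (not the actual OpenSSL C code) of a Heartbeat responder that trusts the attacker-supplied length field, alongside the bounds check that fixes it; the buffer contents are invented for illustration.

```python
# Simulation of the Heartbleed over-read: the responder copies the claimed
# number of bytes even when the real payload is shorter, leaking whatever
# sits next to it in "memory". The secret strings are made up.
SERVER_MEMORY = bytearray(b"hello|SECRET_PRIVATE_KEY|SESSION_COOKIE")
PAYLOAD_LEN = 5  # the genuine payload is just b"hello"

def heartbeat_vulnerable(claimed_length):
    # Missing bounds check: trusts the length field from the request.
    return bytes(SERVER_MEMORY[:claimed_length])

def heartbeat_fixed(claimed_length):
    # The fix: if the claimed length exceeds the actual payload, silently
    # discard the request (per RFC 6520) instead of responding.
    if claimed_length > PAYLOAD_LEN:
        return None
    return bytes(SERVER_MEMORY[:claimed_length])

print(heartbeat_vulnerable(40))  # leaks b'hello|SECRET_PRIVATE_KEY|...'
print(heartbeat_fixed(40))       # None: request discarded
```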

Security expert Bruce Schneier rated the vulnerability an 11 on a scale of 1 to 10 (Arneja & DelGrosso, 2014).

What effect did the Heartbleed bug have on information security practices within enterprise organizations?

For enterprise:
1. Perimeter defenses are not enough; security must be built into the software (Arneja & DelGrosso, 2014).
2. “Code that an enterprise did not write itself can put the organization in a serious risk” (McGraw, 2014)
3. “Computer security problems are almost always caused by bad software” (McGraw, 2014)
4. “Open source is not more secure” (McGraw, 2014)
5. Patching should be a requirement, not merely a recommendation, and generalized risk ratings should be eliminated.


For software companies:
1. Make user IDs and passwords difficult to identify in a memory dump (Arneja & DelGrosso, 2014).
2. Integrate security-related activities into software creation (Gajawada, 2016).
3. Thoroughly review open source components (Gajawada, 2016).
4. Never trust input from an external system (Gajawada, 2016).
5. Always perform server-side input validation (Gajawada, 2016).

Is this only an issue with OpenSSL?

It should be noted that this is not just an issue with OpenSSL; the problem extends to many cryptographic libraries, their misuse, and their documentation. For example, LibreSSL was intended to fix many of the problems in OpenSSL, yet researchers discovered problems in its random number generator:

“The problem resides in the pseudo random number generator (PRNG) that LibreSSL relies on to create keys that can't be guessed even when an attacker uses extremely fast computers. When done correctly, the pool of numbers supplied is so vast that the output will almost never be repeated in subsequent requests, and there should be no way for adversaries to accurately predict which numbers are more likely than others to be chosen. Generators that don't produce an extremely large pool of truly random numbers can undermine an otherwise robust encryption scheme. The Dual EC_DRBG influenced by the National Security Agency and used by default in RSA's BSAFE toolkit, for instance, is reportedly so predictable that it can undermine the security of applications that rely on it” (Goodin, 2014).
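The practical takeaway for developers is to draw key material only from a cryptographically secure generator. A minimal Python illustration of the difference follows (these are standard-library modules, not LibreSSL internals):

```python
# random.Random is a deterministic Mersenne Twister: anyone who learns or
# guesses the seed can reproduce every "key" it ever generates.
import random
import secrets

seeded = random.Random(42)
print(seeded.getrandbits(128))  # fully predictable given the seed

# secrets draws from the OS CSPRNG and is suitable for keys and tokens.
print(secrets.token_hex(16))    # 128 bits of unpredictable key material
```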

As fuzzing tools improve, vulnerabilities are being discovered in more of these alternative cryptographic libraries. When a renowned security researcher fuzzed MatrixSSL with a Google tool, he discovered the following within minutes:

“Within a matter of minutes, the testing produced results indicating memory safety issues, and before long, I had identified three distinct problems, including an exploitable heap buffer overflow. CVE-2016-6890 has been assigned to the buffer overflow, CVE-2016-6891 has been assigned to a buffer over-read, and CVE-2016-6892 is assigned to an improper free in x509FreeExtensions()” (Young, 2016).
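For context on what such fuzzing looks like, here is a hedged harness sketch using Atheris, Google's coverage-guided fuzzer for Python; the parse_certificate target is a hypothetical stand-in, since the MatrixSSL testing was done against native C code.

```python
# Minimal Atheris fuzz harness: feed coverage-guided random inputs into a
# parser and let crashes surface memory-safety and logic bugs.
# parse_certificate is a hypothetical target, not a real MatrixSSL binding.
import sys

import atheris

def parse_certificate(data: bytes) -> None:
    # Stand-in parser: a real harness would call the library under test.
    if data[:2] == b"\x30\x82":  # looks like DER; pretend to parse further
        _ = data[2:]

def test_one_input(data: bytes) -> None:
    try:
        parse_certificate(data)
    except ValueError:
        pass  # expected rejections are fine; crashes are findings

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```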

In the case study titled “Why does cryptographic software fail?”, researchers concluded that 83% of the bugs came from applications misusing cryptographic libraries, while 17% of the bugs were in the cryptographic libraries themselves (Lazar, Chen, Wang, & Zeldovich, 2014). This shows the problem is not only the integrity of open source software but also the skill required to implement these libraries correctly.


References
Arneja, V., DelGrosso, J. [Arxan]. (2014, May 15). Lessons from heartbleed; Strengthening Application Security on Busted Technology Architectures_ss. [Slideshare Slides]. Retrieved from https://www.slideshare.net/Arxan/lessons-from-heartbleed-strengthening-application-security-on-busted-technology-architectures-ss

Brodkin, J. (2014, April 24). Tech giants, chastened by Heartbleed, finally agree to fund OpenSSL. Retrieved from https://arstechnica.com/information-technology/2014/04/tech-giants-chastened-by-heartbleed-finally-agree-to-fund-openssl/

Fruhlinger, J. (2017, September 13). What is the Heartbleed bug, how does it work and how was it fixed? Retrieved from https://www.csoonline.com/article/3223203/vulnerabilities/what-is-the-heartbleed-bug-how-does-it-work-and-how-was-it-fixed.html

Gajawada, A. (2016, September 6). Software Integrity. Retrieved from https://www.synopsys.com/blogs/software-security/heartbleed-bug/

Goodin, D. (2014, July 15). Only a few days old, OpenSSL fork LibreSSL is declared “unsafe for Linux”. Retrieved from https://arstechnica.com/information-technology/2014/07/only-a-few-days-old-openssl-fork-libressl-is-declared-unsafe-for-linux/

Lazar, D., Chen, H., Wang, X., & Zeldovich, N. (2014). Why does cryptographic software fail? A case study and open problems. Retrieved from https://people.csail.mit.edu/nickolai/papers/lazar-cryptobugs.pdf

McGraw, G. (2014, April). McGraw on Heartbleed shock and awe: What are the real lessons? Retrieved from https://searchsecurity.techtarget.com/opinion/McGraw-on-Heartbleed-shock-and-awe-What-are-the-real-lessons

OpenSSL.org. (n.d.). Home page. Retrieved July 30, 2018, from https://www.openssl.org

Synopsys. (2014, April 29). The Heartbleed Bug. Retrieved from http://heartbleed.com/

Young, C. (2016, October 10). Flawed MatrixSSL Code Highlights Need for Better IoT Update Practices. Retrieved from https://www.tripwire.com/state-of-security/security-data-protection/cyber-security/flawed-matrixssl-code-highlights-need-for-better-iot-update-practices/
 

 
 

Learning from OpenSSL

The major flaw in OpenSSL was attributed to a small programming error made by a PhD student.

“The offending code was submitted just before New Year in 2012, by Robin Seggelmann, a PhD student no longer attached to the project. Speaking to The Guardian, he said, ‘I am responsible for the error, because I wrote the code and missed the necessary validation by an oversight. Unfortunately, this mistake also slipped through the review process and therefore made its way into the released version’” (Waugh, 2014).

To prevent the next Heartbleed, security expert David A. Wheeler offers his recommendations:

Preconditions, ordered by cost (cheapest first):

1. Simplify the code: Many of the static techniques for countering Heartbleed-like defects, including manual review, were thwarted because the OpenSSL code is just too complex. Code that is security-sensitive needs to be “as simple as possible.”

2. Simplify the Application Program Interface (API): Although it is slightly out-of-scope for this paper, a related problem is that application program interfaces (APIs) are often absurdly complex or difficult to use.

3. Allocate and deallocate memory normally: Secure programs should allocate and deallocate memory normally, without special program-specific allocation systems or memory caching systems. At the very least, it should be easy to disable them, and testing should ensure that disabling them works. Some of the techniques for mitigating the effects of Heartbleed appear to have been thwarted by the way OpenSSL allocated memory.

4. Use a standard FLOSS license: This is speculative on my part, but I believe that much more code review and many more contributions would occur if OpenSSL used a standard widely-used license.

Caveats
1. “Do not use just one of these tools and techniques to develop secure software. Developing secure software requires a collection of approaches, starting with knowing how to develop secure software in the first place”

2. There is no one master list of the types of tools and techniques that exist.

3. This is certainly not a complete list of ways that Heartbleed could have been detected ahead-of-time, as I mentioned above. I do hope it helps; further suggestions would be welcome!

From the above, we can see how important visibility into code and rigorous validation are for data security. The above also provides a blueprint for the chain of failures that can follow when a single flawed commit, whether an honest mistake or planted by bad actors inside a software company, makes its way into widely deployed software.

Reference
Waugh, R. (2014, April 11). “I am responsible”: Heartbleed developer breaks silence. Retrieved from https://www.welivesecurity.com/2014/04/11/i-am-responsible-heartbleed-developer-breaks-silence/

Wheeler, D.A. (2017, January 29). How to Prevent the next Heartbleed. Retrieved from https://www.dwheeler.com/essays/heartbleed.html