Code Review with a Tool


Application of Touchpoints

The seven touchpoints, applied to the software artifacts they touch (external review spans the whole process):
- Requirements and use cases: 5. Abuse cases; 6. Security requirements
- Architecture and design: 2. Risk analysis
- Test plans: 4. Risk-based security tests
- Code: 1. Code review (tools)
- Tests and test results: 3. Penetration testing; 2. Risk analysis
- Feedback from the field: 7. Security operations

Code Review (Tool)
- Artifact: code
- Targets implementation bugs
- Uses static analysis tools
- A white-hat activity

Software Bugs
Programming bugs:
- The compiler catches the error, the developer corrects the bug, and development continues
Security-relevant bugs:
- May lie dormant for years
- Potentially higher cost than an ordinary programming error
Who should be responsible for a security bug? The software developer? A security expert?

Implementation bugs
- Code review focuses on implementation bugs: essentially those that static analysis can find
- Security bugs are real problems, but architectural flaws are just as big a problem
- Code review can therefore capture only about half of the problems
  - E.g., a buffer overflow bug in a particular line of code (see the sketch below)
- Architectural problems are very difficult to find by looking at the code
  - Especially true for today's large software
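A minimal sketch of the kind of single-line implementation bug a static analysis tool catches; the buffer size and input handling here are illustrative, not from the slides:

```c
#include <stdio.h>

int main(void) {
    char buf[8];

    /* The implementation bug lives in a single line: gets() performs no
       bounds checking, so input longer than 7 characters overflows buf.
       Virtually every static analysis tool flags the call:

           gets(buf);    // BAD: unbounded read (removed in C11)
    */

    /* The fix is just as local: bound the read to the buffer size. */
    if (fgets(buf, sizeof buf, stdin) != NULL)
        printf("%s", buf);
    return 0;
}
```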

Manual vs. Automated Code Review
Manual code review:
- Tedious, error prone, exhausting
- Needs an expert with the mindset of an attacker!
Static analysis tools:
- Identify many common coding problems
- Faster than manual review
- Need a developer with a basic understanding of security problems and of how to fix the detected ones

Best Practices
Peer code review recommendations from SmartBear Software:
- Based on the Cisco code review study
- Over 6,000 programmers and 100 companies
- "Lessons learned" results
- Lightweight code review

Best Practices Recommendations 1
1. Review fewer than 200-400 lines of code at a time
   - Optimizes the number of detected vulnerabilities (70-90%)
2. Aim for an inspection rate of less than 300-500 lines of code per hour
   - Faster is not better, measured by the number of detected vulnerabilities
3. Do not spend more than 60-90 minutes on a review at a time
   - Efficiency drops after about an hour of intense work
4. Make developers annotate their code
   - Encourages developers to "double-check" their work
   - Reduces the number of vulnerabilities in the code

Best Practices Recommendations 2
5. Establish quantifiable goals for code review
   - External metrics: e.g., reduced number of support calls
   - Internal metrics: e.g., defect rate
6. Maintain checklists
   - Prevent omissions of important security components
7. Verify that defects are actually fixed
   - Requires good collaborative review software
8. Managers must support code review
   - Support team building and acceptance of the process

Best Practices Recommendations 3
9. Beware of the "Big Brother" effect
   - Use of metrics: the manager's role
10. Exploit the "Ego effect"
    - Use code review to encourage good coding habits in developers
    - Review at least 20-33% of the code
11. Adopt a lightweight style of review
    - Tool assisted
    - Just as efficient as a formal, heavyweight review, but requires about one-fifth of the time

Source Code vs. Binary Code Check
What to check: source code or binary code?
Source code:
- Shows the logic, control flow, and data flow
- Shows explicit code lines
- Fixes can be carried out on the source code
Compiled (binary) code:
- May need reverse engineering (disassembly, decompilation)
- Finding a few vulnerabilities is easy; finding all of them is difficult
- Fixes may be incorporated as binary modules or external filters

How Static Analysis Works
Looks for a fixed set of patterns or rules:
- Syntactic matches
- Lexical analysis
- Flow analysis (control flow, call chains, data flow)
False negatives give a false sense of security:
- Sound tool: given a set of assumptions, the static analysis tool produces no false negatives
- Commercial tools are unsound
False positives: warnings about code that is in fact safe (see the sketch below)
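A hedged sketch of what purely lexical, pattern-based matching sees; the function names are invented for illustration. Both calls below match the rule "call to strcpy", but only one is a real defect, which is why lexical tools produce false positives and flow analysis is needed to tell the cases apart:

```c
#include <string.h>

void greet(void) {
    char buf[16];
    /* Matches the lexical rule "strcpy is dangerous", but the source is a
       short constant string: flagging this call is a false positive. */
    strcpy(buf, "hello");
}

void echo(const char *user_input) {
    char buf[16];
    /* Matches the same rule, and here the warning is justified: user_input
       is unbounded. Only flow analysis distinguishes the two calls. */
    strcpy(buf, user_input);
}
```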

Static Analysis
- Identifies vulnerable constructs
- Similar to a compiler: preprocesses the source file and evaluates it against known vulnerabilities
Scope of analysis:
- Local: one function at a time
- Module-level: one class (or compilation unit) at a time; incorporates relationships between functions
- Global: the entire program; all relationships between functions (illustrated below)
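A minimal illustration, with hypothetical function names, of why the scope of analysis matters: the defect is invisible when each function is analyzed on its own and only appears once the call chain is tracked:

```c
#include <string.h>

/* Viewed locally, copy_name() is unremarkable: whether the copy is safe
   depends entirely on the caller, so a local analysis stays quiet. */
static void copy_name(char *dst, const char *src) {
    strcpy(dst, src);
}

/* A global (whole-program) analysis follows the call chain, sees that
   user_input is unbounded while name holds only 16 bytes, and reports
   the buffer overflow that local analysis missed. */
void handle_request(const char *user_input) {
    char name[16];
    copy_name(name, user_input);
}
```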

Rule Coverage
Taxonomy of coding errors:
- Language specific (e.g., C/C++, Java, etc.)
- Functions or APIs

Taxonomy of coding errors
Input validation and representation:
- Some sources of problems: metacharacters, alternate encodings, numeric representations
- Forgetting input validation
- Trusting input too much
- Examples: buffer overflow; integer overflow (see the sketch below)
API abuse:
- An API represents a contract between caller and callee
- E.g., failure to enforce the principle of least privilege
Security features:
- Getting security features right is difficult
- E.g., insecure randomness, password management, authentication, access control, cryptography, privilege management, etc.
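A sketch of how the slide's two example bugs combine in practice; the function and record size are hypothetical, but an unchecked multiplication that sizes an allocation is the classic input-validation failure:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Allocates room for n fixed-size records and copies them in. */
void *read_records(size_t n, const unsigned char *src) {
    const size_t REC_SIZE = 64;

    /* Integer overflow check: without it, a huge attacker-chosen n makes
       n * REC_SIZE wrap around, malloc() returns a tiny buffer, and the
       memcpy() below turns into a heap buffer overflow. */
    if (n > SIZE_MAX / REC_SIZE)
        return NULL;

    void *buf = malloc(n * REC_SIZE);
    if (buf != NULL)
        memcpy(buf, src, n * REC_SIZE);
    return buf;
}
```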

Taxonomy of coding errors (continued)
Time and state:
- Typical race condition issues
- E.g., deadlock
Error handling:
- Security defects related to error handling are very common
- They arise in two ways:
  - Forgetting to handle errors, or handling them carelessly
  - Producing errors that either give away far too much information, or are so dangerous that no one wants to handle them
- E.g., unchecked error value; empty catch block (see the sketch below)
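A hedged sketch of the "unchecked error value" bug and its fix; the function is invented for illustration (C has no catch blocks, so the example sticks to ignored return values):

```c
#include <stdio.h>

/* Appends a record to an audit log; returns 0 on success, -1 on failure. */
int save_audit_record(const char *path, const char *record) {
    FILE *f = fopen(path, "a");
    if (f == NULL)          /* the classic bug is omitting this check */
        return -1;

    if (fprintf(f, "%s\n", record) < 0) {
        fclose(f);
        return -1;
    }

    /* fclose() can fail too (e.g., a full disk while flushing buffers);
       ignoring its return value silently drops the audit record. */
    if (fclose(f) != 0)
        return -1;
    return 0;
}
```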

Taxonomy of coding errors (continued)
Code quality:
- Poor code quality leads to unpredictable behavior
- Poor usability
- Allows an attacker to stress the system in unexpected ways
- E.g., double free; memory leak (see the sketch below)
Encapsulation:
- The object-oriented approach includes boundaries
- E.g., comparing classes by name
Environment:
- Everything outside of the code that is nonetheless important for the security of the software
- E.g., a hardwired password in a configuration file
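A small sketch of the slide's two code-quality examples in one function; the scenario is made up, but error paths like this are where double frees and memory leaks typically hide:

```c
#include <stdlib.h>
#include <string.h>

int process(const char *input) {
    /* Heap copy of the input; it must be freed exactly once. */
    char *copy = malloc(strlen(input) + 1);
    if (copy == NULL)
        return -1;
    strcpy(copy, input);

    if (copy[0] == '\0') {
        free(copy);     /* omitting this free() would be the memory leak    */
        return -1;      /* omitting this return would cause the double free */
    }

    /* ... work with copy ... */

    free(copy);
    copy = NULL;        /* defensive: free(NULL) later is a harmless no-op */
    return 0;
}
```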

Commercial Tools
- Easy to use, but still require expert knowledge
- Can process large code bases (millions of lines) efficiently
- Reviewing the results requires specialized knowledge
- Encapsulate knowledge of known vulnerabilities and efficient flow analysis
- Encourage efficient and secure coding

Tool Characteristics
A static analysis tool should:
- Be designed for security
- Support multiple tiers
- Be extensible
- Be useful for both security analysts and developers
- Support the existing development process
- Make sense for multiple stakeholders

Questions?
