Safe Coding

Coding standards encourage programmers to follow a set of uniform rules and guidelines determined by the requirements of the project and organization rather than by the programmer's personal familiarity or preference. Developers and software designers apply these standards during software development to create secure systems. Secure coding standards remain a work in progress among security researchers, language experts, and software developers. The articles cited here cover topics such as software entropy, traceability, embedded systems, and reliability.

  • Suvrojit Das, Debayan Chatterjee, D. Ghosh, Narayan C. Debnath, “Extracting the System Call Identifier From Within VFS: A Kernel Stack Parsing-Based Approach,” International Journal of Information and Computer Security, Volume 6 Issue 1, March 2014, (Pages 12-50). (ID#:14-1423) Available at: http://dl.acm.org/citation.cfm?id=2597545.2597547&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper addresses the extraction of system call information from the VFS layer of the Linux kernel. The authors propose a kernel stack parsing method for identifying system calls, with a view to bolstering file timestamp metadata logs. Keywords: (not available).
  • Aggarwal, P.K.; Dharmendra; Jain, P.; Verma, T., "Adaptive approach for Information Hiding in WWW pages," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 113-118, 7-8 Feb. 2014. (ID#:14-1424) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781262&isnumber=6781240 This paper opens a new avenue for safe communication through information hiding on the Internet. Steganography in WWW pages makes it possible to send data without its being altered, intercepted, or traced back to the sender. Various steganographic techniques have been designed to ensure the integrity and confidentiality of data carried in HTML documents. The technique proposed in this paper operates on the lines of the source code of HTML web pages: it hides data in those lines without affecting the content of the source code, its originality, or the rendered web page (see Sketch 1 after this list). Keywords: Internet; data encapsulation; hypermedia markup languages; steganography; HTML document; WWW pages; adaptive approach; information hiding; steganographic technique; Cryptography; HTML; Head; Ice; Indexes; Embed data; HTML tags; HTML web page; Steganography
  • Richard Baskerville, Paolo Spagnoletti, Jongwoo Kim, "Incident-Centered Information Security: Managing a Strategic Balance Between Prevention and Response," Information and Management, Volume 51 Issue 1, January 2014, (Pages 138-151). (ID#:14-1425) Available at: http://dl.acm.org/citation.cfm?id=2566268.2566362&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper highlights the importance of balancing the information security response and prevention paradigms, which have historically been pitted against each other. The authors offer a broad security framework centered on maintaining that balance; a case study and its results are discussed. Keywords: Case study, Incident-centered analysis, Information security management, Prevention paradigm, Response paradigm, Security balance
  • Traci J. Hess, Anna L. McNab, K. Asli Basoglu, “Reliability Generalization of Perceived Ease of Use, Perceived Usefulness, and Behavioral Intentions,” MIS Quarterly, Volume 38 Issue 1, March 2014, (Pages 1-1). (ID#:14-1426) Available at: http://dl.acm.org/citation.cfm?id=2600518.2600520&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper details a reliability generalization study of the perceived ease of use, perceived usefulness, and behavioral intention constructs from the technology acceptance model (TAM). The authors reviewed 380 articles and performed reliability generalization on the coefficients they report, finding differences in the reliability coefficients of the three constructs (Sketch 2 after this list illustrates the coefficient typically aggregated in such studies). Keywords: behavioral intentions, ease of use, effect size attenuation, meta-analysis, reliability, reliability generalization, technology acceptance model (TAM), usefulness
  • Philip Axer, Rolf Ernst, Heiko Falk, Alain Girault, Daniel Grund, Nan Guan, Bengt Jonsson, Peter Marwedel, Jan Reineke, Christine Rochange, Maurice Sebastian, Reinhard Von Hanxleden, Reinhard Wilhelm, Wang Yi, “Building Timing Predictable Embedded Systems,” ACM Transactions on Embedded Computing Systems (TECS), Volume 13 Issue 4, February 2014, Article No. 82. (ID#:14-1428) Available at: http://dl.acm.org/citation.cfm?id=2592905.2560033&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper surveys current research on building embedded systems that are both performant and timing predictable. It discusses predictability concerns in embedded system design, language-based programming approaches for predictable timing, and multicore predictability, and it takes randomly occurring errors into account in its treatment of predictability in networked embedded systems. Keywords: Embedded systems, predictability, resource sharing, safety-critical systems, timing analysis
  • Carol Smidts, Chetan Mutha, Manuel Rodríguez, Matthew J. Gerber, “Software Testing With an Operational Profile: OP Definition,” ACM Computing Surveys (CSUR), Volume 46 Issue 3, January 2014, Article No. 39. (ID#:14-1429) Available at: http://dl.acm.org/citation.cfm?id=2578702.2518106&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This article surveys, analyzes, and classifies operational profiles (OPs), which characterize the type and frequency of software inputs and are used in software testing techniques (see Sketch 3 after this list). The survey follows a mixed method based on systematic maps and qualitative analysis. The article is organized around one main dimension, OP classes, which characterize the OP model and form the basis for generating test cases. The classes are organized as a taxonomy composed of common OP features (e.g., profiles, structure, and scenarios), software boundaries (which define the scope of the OP), OP dependencies (such as those on the code or on the field of interest), and OP development (which specifies when and how an OP is developed). To facilitate understanding of the relationships between OP classes and their elements, a meta-model was developed that can be used to support OP standardization. Many open research questions related to OP definition and development are identified based on the survey and classification. Keywords: Software testing, operational profile, software reliability, taxonomy, usage models
  • Jitender Choudhari, Ugrasen Suman, “Extended Iterative Maintenance Life Cycle using eXtreme Programming,” ACM SIGSOFT Software Engineering Notes, Volume 39 Issue 1, January 2014, (Pages 1-12). (ID#:14-1430) Available at: http://dl.acm.org/citation.cfm?id=2557833.2557845&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Software maintenance is the continuous process of extending the operational life of software. The existing approaches to software maintenance, derived from traditional development approaches, are unable to resolve the problems of unstructured code, low team morale, poor project visibility, lack of communication, and lack of proper test suites. Extreme programming practices such as test-driven development, refactoring, pair programming, continuous integration, small releases, and collective ownership help to resolve these problems. This paper proposes a process model for software maintenance that uses extreme programming practices to resolve maintenance issues. The proposed approach speeds up the maintenance process and produces more maintainable code that requires less effort for future maintenance and evolution. The model is validated by applying it to several maintenance projects in an academic environment, where it was observed to yield higher-quality code. The model also enhances both the learning and the productivity of the team by improving its morale, courage, and confidence, which supports higher motivation during maintenance. Keywords: extreme programming, software maintenance, safe coding
  • Abdallah Qusef, Gabriele Bavota, Rocco Oliveto, Andrea De Lucia, Dave Binkley, “Recovering Test-To-Code Traceability Using Slicing And Textual Analysis,” Journal of Systems and Software, Volume 88, February 2014, (Pages 147-168). (ID#:14-1431) Available at: http://dl.acm.org/citation.cfm?id=2565887.2566083&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Test suites are a valuable source of up-to-date documentation, as developers continuously modify them to reflect changes in the production code and preserve an effective regression suite. While maintaining traceability links between unit tests and the classes under test can be useful for selectively retesting code after a change, the value of traceability links goes far beyond these potential savings. One key use is to help developers better comprehend the dependencies between tests and classes and to maintain consistency during refactoring. Despite its importance, test-to-code traceability is not common in software development, and, when needed, traceability information has to be recovered during software development and evolution. The authors propose an advanced approach, named SCOTCH+ (Source code and COncept based Test to Code traceability Hunter), to support developers in identifying links between unit tests and tested classes. Given a test class, represented by a JUnit class, the approach first exploits dynamic slicing to identify a set of candidate tested classes. Then, external and internal textual information associated with the classes retrieved by slicing is analyzed to refine this set and identify the final set of candidate tested classes. The external information is derived from analysis of the class name, while the internal information is derived from identifiers and comments. The approach is evaluated on five software systems, and the results indicate that its accuracy far exceeds that of the leading techniques in the literature (the naive name-matching baseline that such approaches improve upon is illustrated in Sketch 4 after this list). Keywords: Dynamic slicing, Information retrieval, Test-to-code traceability
  • Daniel Perelman, Sumit Gulwani, Dan Grossman, Peter Provost, “Test-driven Synthesis,” PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, (Pages 408-418). (ID#:14-1432) Available at: http://dl.acm.org/citation.cfm?id=2594291.2594297&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Programming-by-example technologies empower end users to create simple programs merely by providing input/output examples. Existing systems are designed around solvers specialized for a specific set of data types or a domain-specific language (DSL). The authors present a program synthesizer that can be parameterized by an arbitrary DSL, which may contain conditionals and loops, and is therefore able to synthesize programs in any domain. To use the synthesizer, the user provides a sequence of increasingly sophisticated input/output examples along with an expert-written DSL definition. These two inputs correspond to the two key ideas that allow the synthesizer to work in arbitrary domains. First, the authors developed a novel iterative synthesis technique inspired by test-driven development (which gives the technique its name, test-driven synthesis), where the input/output examples are consumed one at a time as the program is refined. Second, the DSL allows the system to take an efficient component-based approach to enumerating possible programs. The paper presents applications of the synthesis methodology to end-user programming for transformations over strings, XML, and table layouts, and compares the synthesizer on these applications to state-of-the-art DSL-specific synthesizers as well as to the general-purpose synthesizer Sketch (a toy illustration of example-driven enumerative synthesis appears as Sketch 5 after this list). Keywords: end-user programming, program synthesis, test driven development
  • Luke Stark, Matt Tierney, “Lockbox: Mobility, Privacy and Values in Cloud Storage,” Ethics and Information Technology, Volume 16 Issue 1, March 2014, (Pages 1-13). (ID#:14-1433) Available at: http://dl.acm.org/citation.cfm?id=2597586.2597601&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 This paper examines one particular problem of values in cloud computing: how individuals can take advantage of the cloud to store data without compromising their privacy and autonomy. Through the creation of Lockbox, an encrypted cloud storage application, the authors explore how designers can use reflection on human values to maintain both privacy and usability in the cloud (the underlying client-side encryption idea is illustrated in Sketch 6 after this list). Keywords: Autonomy, Cloud computing, Cryptography, Human-Computer Interaction (HCI), Mobility, Privacy, Reflective Design, Usability, User Empowerment, Values and Design
  • Gerardo Canfora, Luigi Cerulo, Marta Cimitile, Massimiliano Di Penta, “How Changes Affect Software Entropy: An Empirical Study,” Empirical Software Engineering, Volume 19 Issue 1, February 2014, (Pages 1-38). (ID#:14-1434) Available at: http://dl.acm.org/citation.cfm?id=2578395.2578409&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 Software systems continuously change for various reasons, such as adding new features, fixing bugs, or refactoring. Changes may either increase the source code's complexity and disorganization or help to reduce them. Aim: This paper empirically investigates the relationship of source code complexity and disorganization, measured using source code change entropy (see Sketch 7 after this list), with four factors: the presence of refactoring activities, the number of developers working on a source code file, the participation of classes in design patterns, and the kinds of changes occurring on the system, classified in terms of topics extracted from commit notes. The authors carried out an exploratory study on an interval of the lifetime of four open source systems, namely ArgoUML, Eclipse-JDT, Mozilla, and Samba, analyzing the relationship between source code change entropy and these four factors. Results: The study shows that (i) change entropy decreases after refactoring, (ii) files changed by a higher number of developers tend to exhibit higher change entropy than others, (iii) classes participating in certain design patterns exhibit higher change entropy than others, and (iv) changes related to different topics exhibit different change entropy; for example, bug fixes exhibit limited change entropy while changes introducing new features exhibit high change entropy. Conclusions: These results indicate that the nature of changes (in particular, refactorings), the software design, and the number of active developers are factors related to change entropy. The findings contribute to understanding the software aging phenomenon and are a preliminary step toward identifying better ways to counteract it. Keywords: Mining software repositories, Software complexity, Software entropy
  • Christos Margiolas, Michael F. P. O'Boyle, “Portable and Transparent Host-Device Communication Optimization for GPGPU Environments,” Proceedings of the Annual IEEE/ACM International Symposium on Code Generation and Optimization, February 2014. (ID#:14-1435) Available at: http://dl.acm.org/citation.cfm?id=2581122.2544156&coll=DL&dl=GUIDE&CFID=341292836&CFTOKEN=35935271 General-purpose graphics processing units (GPUs) offer the potential for high computational performance at reduced cost and power. They are typically employed as accelerators in heterogeneous settings, where an application resides on a host multi-core and dispatches work to the GPU. Workload dispatch, however, is frequently accompanied by large-scale data transfers between the host's main memory and the GPU's dedicated memory, and for many applications memory allocation and communication overhead can severely reduce the benefits of GPU acceleration. This paper develops an approach that reduces host-device communication overhead for OpenCL applications without modification or recompilation of the application source code and that is portable across platforms. It achieves this by tracing and analyzing the application's calls to the runtime and then selecting the best platform-specific memory allocation and communication policy (see Sketch 8 after this list). Applied to 12 existing OpenCL benchmarks from the Parboil and Rodinia suites on 3 different platforms, the approach gives average speedups of 1.51, 1.31, and 1.48, respectively; in certain cases it yields up to a threefold improvement over current approaches. Keywords: GPU, OpenCL, communication optimization, heterogeneous computing, profiling, runtime, tracing
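
Illustrative Sketches:

The short code sketches below expand on techniques described in the citations above. They are simplified, hypothetical illustrations written for this summary, not the authors' implementations; all function names, class names, and data are invented for demonstration.

Sketch 1 (line-based information hiding in HTML, cf. Aggarwal et al.). One simple way to hide data in the lines of an HTML page, assumed here purely for illustration and not necessarily the paper's encoding, is to represent each payload bit as the presence or absence of a trailing space on a source line; browsers ignore trailing whitespace, so the rendered page is unchanged.

```python
# Toy line-based steganography sketch (NOT the cited paper's scheme):
# each source line carries one hidden bit as the presence (1) or
# absence (0) of a trailing space, which browsers ignore.

def embed(html: str, payload_bits: str) -> str:
    lines = html.splitlines()
    if len(payload_bits) > len(lines):
        raise ValueError("cover page has too few lines for the payload")
    out = []
    for i, line in enumerate(lines):
        bit = payload_bits[i] if i < len(payload_bits) else "0"
        out.append(line.rstrip() + (" " if bit == "1" else ""))
    return "\n".join(out)

def extract(html: str, n_bits: int) -> str:
    return "".join("1" if line.endswith(" ") else "0"
                   for line in html.splitlines()[:n_bits])

cover = "<html>\n<head><title>t</title></head>\n<body>\n<p>hi</p>\n</body>\n</html>"
stego = embed(cover, "1011")
assert extract(stego, 4) == "1011"   # rendered page is unaffected
```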
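
Sketch 2 (reliability coefficient, cf. Hess et al.). Reliability generalization aggregates reliability coefficients reported across studies, most commonly Cronbach's alpha. Below is a minimal computation of that coefficient on invented survey data; this is the standard textbook formula, not code from the cited study.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import statistics

def cronbach_alpha(item_scores):
    # item_scores: one list of scores per item, aligned across respondents.
    k = len(item_scores)
    item_vars = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

# Three hypothetical "perceived ease of use" items, four respondents:
items = [[5, 4, 4, 2], [5, 3, 4, 2], [4, 4, 5, 1]]
print(cronbach_alpha(items))   # ≈ 0.925, i.e. high internal consistency
```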
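
Sketch 3 (operational-profile-driven test selection, cf. Smidts et al.). An operational profile assigns each operation an estimated probability of occurrence in the field. One simple use, sketched below with an invented profile, is to draw test cases in proportion to those probabilities so that reliability estimates from testing reflect expected usage.

```python
# Draw test cases in proportion to a (hypothetical) operational profile.
import random

operational_profile = {   # operation -> estimated field usage probability
    "login": 0.50,
    "search": 0.30,
    "checkout": 0.15,
    "export_report": 0.05,
}

def draw_test_cases(profile, n, seed=42):
    rng = random.Random(seed)
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

suite = draw_test_cases(operational_profile, n=1000)
# Frequently used operations dominate the suite, roughly 500/300/150/50.
print({op: suite.count(op) for op in operational_profile})
```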
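
Sketch 4 (test-to-code traceability, cf. Qusef et al.). SCOTCH+ combines dynamic slicing with textual analysis; the naive name-matching baseline that such approaches improve upon can be stated in a few lines (class names here are invented).

```python
# Naive baseline: link a JUnit-style test class to a production class
# by stripping a leading or trailing "Test" token from its name.
def candidate_classes(test_class_name, production_classes):
    stripped = test_class_name
    if stripped.startswith("Test"):
        stripped = stripped[len("Test"):]
    if stripped.endswith("Test"):
        stripped = stripped[:-len("Test")]
    return [c for c in production_classes if c == stripped]

print(candidate_classes("ShoppingCartTest", {"ShoppingCart", "Order"}))
# -> ['ShoppingCart']; this fails whenever naming conventions are not
# followed, which is why slicing and textual analysis are needed.
```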
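
Sketch 5 (example-driven enumerative synthesis, cf. Perelman et al.). A toy illustration of the general idea, not the paper's DSL-parameterized system: enumerate compositions of small components and return the first one consistent with the input/output examples, rejecting candidates at the first failing example.

```python
# Toy enumerative synthesizer over a tiny set of string components.
from itertools import product

COMPONENTS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=3):
    for depth in range(1, max_depth + 1):
        for combo in product(COMPONENTS, repeat=depth):
            def program(s, combo=combo):
                for name in combo:
                    s = COMPONENTS[name](s)
                return s
            # Check examples one at a time; all() stops at the first failure.
            if all(program(i) == o for i, o in examples):
                return combo
    return None

print(synthesize([("  hi ", "HI")]))   # -> ('upper', 'strip')
```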
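
Sketch 6 (client-side encryption for cloud storage, cf. Stark and Tierney). The privacy property Lockbox aims at, illustrated here with the third-party Python "cryptography" package rather than Lockbox's own code: data is encrypted on the user's device, so the cloud provider only ever stores ciphertext.

```python
# Encrypt locally before upload; the key never leaves the device.
from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()      # kept on the user's device, never uploaded
cipher = Fernet(key)

plaintext = b"private notes"
blob = cipher.encrypt(plaintext)            # this is what gets uploaded
assert cipher.decrypt(blob) == plaintext    # only the key holder can read it
```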
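
Sketch 7 (source code change entropy, cf. Canfora et al.). Change entropy, in the style this paper builds on, treats the fraction of a period's changes that touch each file as a probability distribution and computes its Shannon entropy, often normalized by the maximum possible entropy (the exact normalization is assumed here).

```python
# Normalized Shannon entropy over the distribution of changes across files.
import math

def change_entropy(changes_per_file):
    # Assumes at least two files; p_i = share of changes touching file i.
    total = sum(changes_per_file.values())
    probs = [c / total for c in changes_per_file.values() if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(changes_per_file))   # scale to [0, 1]

print(change_entropy({"a.c": 5, "b.c": 5, "c.c": 5}))    # ≈ 1.0: scattered changes
print(change_entropy({"a.c": 13, "b.c": 1, "c.c": 1}))   # ≈ 0.44: focused changes
```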
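
Sketch 8 (trace-then-decide communication tuning, cf. Margiolas and O'Boyle). A greatly simplified, CPU-only illustration of observing an application's runtime calls without touching its source and then choosing a policy from the trace; the transfer function, sizes, and policy rule below are all invented stand-ins, not the paper's OpenCL machinery.

```python
# Intercept calls, record a trace, and pick a policy from the trace.
import functools
import time

trace = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(nbytes):
        t0 = time.perf_counter()
        result = fn(nbytes)
        trace.append((fn.__name__, nbytes, time.perf_counter() - t0))
        return result
    return wrapper

@traced
def host_to_device_copy(nbytes):   # stand-in for a real transfer call
    return bytearray(nbytes)       # simulate the copy

for size in (1 << 10, 1 << 20, 1 << 20):
    host_to_device_copy(size)

# Invented policy rule: repeated large transfers suggest reusing a
# persistent (e.g. pinned) buffer instead of allocating per call.
large = [t for t in trace if t[1] >= 1 << 20]
print("pinned_persistent" if len(large) >= 2 else "default")
```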

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to SoS.Project (at) SecureDataBank.net, and include the ID# of the specific citation in your correspondence.