1. Locating Community Smells in Software Development Processes Using Higher-Order Network Centralities

Authors: Christoph Gote, Vincenzo Perri, Christian Zingg, Giona Casiraghi, Carsten Arzig, Alexander von Gernler, Frank Schweitzer, Ingo Scholtes

Abstract: Community smells are negative patterns in software development teams' interactions that impede their ability to successfully create software. Examples are team members working in isolation, a lack of communication and collaboration across departments or sub-teams, or areas of the codebase on which only a few team members can work. Current approaches aim to detect community smells by analysing static network representations of software teams' interaction structures. As a result, they are insufficient to locate community smells within development processes. Extending beyond the capabilities of traditional social network analysis, we show that higher-order network models provide a robust means of revealing such hidden patterns and complex relationships. To this end, we develop a set of centrality measures based on the MOGen higher-order network model and show their effectiveness in predicting influential nodes using five empirical datasets. We then employ these measures for a comprehensive analysis of a product team at the German IT security company genua GmbH, showcasing our method's success in identifying and locating community smells. Specifically, we uncover critical community smells in two areas of the team's development process. Semi-structured interviews with five team members validate our findings: while the team was aware of one community smell and employed measures to address it, it was not aware of the second. This highlights the potential of our approach as a robust tool for identifying and addressing community smells in software development teams. More generally, our work contributes to the field of social network analysis with a powerful set of higher-order network centralities that effectively capture community dynamics and indirect relationships.
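
To illustrate the kind of path-based, higher-order analysis the abstract refers to, the following minimal Python sketch counts how often nodes act as intermediaries on observed interaction paths and builds second-order "memory node" transitions. The toy data and function names are hypothetical; this is not the MOGen model or the paper's centrality measures, only a sketch of the underlying idea.

```python
from collections import defaultdict

# Hypothetical interaction paths through a development process
# (toy data for illustration; not the genua dataset from the paper).
paths = [
    ("alice", "bob", "carol"),
    ("alice", "bob", "dave"),
    ("erin", "bob", "carol"),
    ("erin", "frank", "carol"),
]

def path_betweenness(paths):
    """Count how often each node appears as an intermediary on observed
    paths -- a simple path-based analogue of betweenness centrality."""
    score = defaultdict(int)
    for path in paths:
        for node in path[1:-1]:
            score[node] += 1
    return dict(score)

def second_order_transitions(paths):
    """Build transitions between second-order nodes (a, b) -> (b, c),
    the kind of representation that higher-order models generalise."""
    counts = defaultdict(int)
    for path in paths:
        for a, b, c in zip(path, path[1:], path[2:]):
            counts[((a, b), (b, c))] += 1
    return dict(counts)

print(path_betweenness(paths))          # bob appears most often as intermediary
print(second_order_transitions(paths))
```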

2. Coverage Goal Selector for Combining Multiple Criteria in Search-Based Unit Test Generation

Authors: Zhichao Zhou, Yuming Zhou, Chunrong Fang, Zhenyu Chen, Xiapu Luo, Jingzhu He, Yutian Tang

Abstract: Unit testing is critical to the software development process, ensuring the correctness of basic programming units in a program (e.g., a method). Search-based software testing (SBST) is an automated approach to generating test cases. SBST generates test cases with genetic algorithms by specifying a coverage criterion (e.g., branch coverage). However, a good test suite must have different properties, which cannot be captured by an individual coverage criterion. Therefore, the state-of-the-art approach combines multiple criteria to generate test cases. Since combining multiple coverage criteria introduces multiple optimization objectives, it hurts the test suites' coverage of certain criteria compared with using a single criterion. To cope with this problem, we propose a novel approach named \textbf{smart selection}. Based on the coverage correlations among criteria and the subsumption relationships among coverage goals, smart selection selects a subset of coverage goals to reduce the number of optimization objectives without missing the properties captured by any criterion. We conduct experiments to evaluate smart selection on $400$ Java classes with three state-of-the-art genetic algorithms under a $2$-minute budget. On average, smart selection outperforms combining all goals on $65.1\%$ of the classes that show significant differences between the two approaches. We also conduct experiments to verify our assumptions about the relationships between coverage criteria. Furthermore, we experiment with larger budgets of $5$, $8$, and $10$ minutes, confirming the advantage of smart selection over combining all goals.
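
As a rough illustration of the selection idea described above, the following Python sketch drops coverage goals that are subsumed by other goals before optimization. The goal names and subsumption relations are hypothetical and simplified; the paper's smart selection also exploits coverage correlations among criteria, which this sketch omits.

```python
# Hypothetical coverage goals and subsumption relations for illustration;
# the names and relations are not taken from the paper or any SBST tool.
goals = {"branch:b1", "branch:b2", "line:l1", "line:l2", "method:m1"}

# goal -> goals it subsumes (covering the key implies covering the values)
subsumes = {
    "branch:b1": {"line:l1", "method:m1"},
    "branch:b2": {"line:l2"},
}

def select_goals(goals, subsumes):
    """Keep only goals that are not subsumed by another goal, so the genetic
    algorithm optimises fewer objectives without losing covered properties."""
    subsumed = set().union(*subsumes.values()) if subsumes else set()
    return {g for g in goals if g not in subsumed}

print(sorted(select_goals(goals, subsumes)))  # ['branch:b1', 'branch:b2']
```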

3. Asynchronous Integration of Real-Time Simulators for HIL-based Validation of Smart Grids

Authors: Catalin Gavriluta, Georg Lauss, Thomas I. Strasser, Juan Montoya, Ron Brandl, Panos Kotsampopoulos

Abstract: As the landscape of devices that interact with the electrical grid expands, the complexity of the scenarios that arise from these interactions also increases. Validation methods and tools are typically domain-specific and designed mainly for component-level testing. For such applications, software- and hardware-in-the-loop simulations as well as lab experiments are all tools that allow testing with different degrees of accuracy at various stages of the development life-cycle. However, the situation is vastly different for the tools and methodologies available for system-level validation: to date, there are no well-defined approaches for testing complex use cases involving components from different domains. Smart grid applications typically include a relatively large number of physical devices, software components, and communication technology, all working hand in hand. This paper explores the testing possibilities opened up by integrating a real-time simulator into co-simulation environments. Three practical implementations of such systems, together with performance metrics, are discussed. Two control-related examples are selected to show the capabilities of the proposed approach.

4. Towards a Systematic Approach for Smart Grid Hazard Analysis and Experiment Specification

Authors: Paul Smith, Eva Piatkowska, Edmund Widl, Filip Pröstl Andrén, Thomas I. Strasser

Abstract: The transition to the smart grid introduces complexity to the design and operation of electric power systems. This complexity has the potential to result in safety-related losses that are caused, for example, by unforeseen interactions between systems and cyber-attacks. Consequently, it is important to identify potential losses and their root causes, ideally during system design. This is non-trivial and requires a systematic approach. Furthermore, due to this complexity, it may not be possible to reason about the circumstances that could lead to a loss; in this case, experiments are required. In this work, we present how two complementary deductive approaches can be usefully integrated to address these concerns: Systems Theoretic Process Analysis (STPA) is a systems approach to identifying safety-related hazard scenarios, and the ERIGrid Holistic Test Description (HTD) provides a structured approach to refining and documenting experiments. The intention of combining these approaches is to enable a systematic hazard analysis whose findings can be experimentally tested. We demonstrate the use of this approach with a reactive power voltage control case study for a low-voltage distribution network.

5. WASM-MUTATE: Fast and Effective Binary Diversification for WebAssembly

Authors: Javier Cabrera-Arteaga, Nicholas Fitzgerald, Martin Monperrus, Benoit Baudry

Abstract: WebAssembly is renowned for its efficiency and security in browser environments and on servers alike. However, the burgeoning ecosystem of WebAssembly compilers and tools lacks robust software diversification systems. We introduce WASM-MUTATE, a compiler-agnostic WebAssembly diversification engine. It is engineered to fulfill three key criteria: 1) rapid generation of semantically equivalent yet behaviorally diverse WebAssembly variants, 2) universal applicability to any WebAssembly program regardless of the source programming language, and 3) the capability to counter high-risk security threats. Utilizing an e-graph data structure, WASM-MUTATE is both fast and effective. Our experiments reveal that WASM-MUTATE can efficiently generate tens of thousands of unique WebAssembly variants in a matter of minutes. Notably, WASM-MUTATE can protect WebAssembly binaries against timing side-channel attacks, specifically Spectre.
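
As a toy illustration of semantics-preserving rewriting, the following Python sketch applies randomly chosen equivalent forms to an arithmetic expression and checks that the result is unchanged. This is only an analogy under simplifying assumptions: WASM-MUTATE operates on WebAssembly instructions through an e-graph, not on Python expression trees.

```python
import random

def evaluate(expr):
    """Evaluate a nested (op, lhs, rhs) expression tree."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = evaluate(a), evaluate(b)
    return {"add": a + b, "sub": a - b, "mul": a * b}[op]

def diversify(expr, rng):
    """Rewrite each node into a randomly chosen semantically equivalent form."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = diversify(a, rng), diversify(b, rng)
    candidates = [(op, a, b)]
    if op in ("add", "mul"):
        candidates.append((op, b, a))                  # commutativity
    if op == "add":
        candidates.append(("sub", a, ("mul", -1, b)))  # x + y == x - (-1 * y)
    return rng.choice(candidates)

rng = random.Random(42)
original = ("add", ("mul", 2, 3), 4)
variant = diversify(original, rng)
assert evaluate(variant) == evaluate(original)  # same result, different shape
print(original, "->", variant)
```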

6. Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names?

Authors: Terry Yue Zhuo, Xiaoning Du, Zhenchang Xing, Jiamou Sun, Haowei Quan, Li Li, Liming Zhu

Abstract: Recent breakthroughs in pre-trained code models, such as CodeBERT and Codex, have shown their superior performance in various downstream tasks. The correctness and unambiguity of API usage among these code models are crucial for achieving desirable program functionalities, requiring them to learn various API fully qualified names structurally and semantically. Recent studies reveal that even state-of-the-art pre-trained code models struggle with suggesting the correct APIs during code generation. However, the reasons for this poor API usage performance are barely investigated. To address this challenge, we propose using knowledge probing as a means of interpreting code models, which uses cloze-style tests to measure the knowledge stored in models. Our comprehensive study examines a code model's capability of understanding API fully qualified names from two different perspectives: API call and API import. Specifically, we reveal that current code models struggle with understanding API names, and that pre-training strategies significantly affect the quality of API name learning. We demonstrate that natural language context can assist code models in locating Python API names and in generalizing Python API name knowledge to unseen data. Our findings provide insights into the limitations and capabilities of current pre-trained code models, and suggest that incorporating API structure into the pre-training process can improve automated API usage and code representations. This work offers significant insights for advancing code intelligence practices and directions for future studies. All experiment results, data, and source code used in this work are available at \url{https://doi.org/10.5281/zenodo.7902072}.
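
Below is a minimal sketch of a cloze-style API-name probe, assuming a Hugging Face fill-mask pipeline with CodeBERT; the model choice, prompt format, and masking granularity are illustrative assumptions rather than the paper's exact protocol, which is available in the linked artifact.

```python
# Minimal cloze-style probe for API-name knowledge, in the spirit of the
# paper's knowledge-probing setup. Model and prompt here are assumptions
# for illustration only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="microsoft/codebert-base")

# API-import perspective: can the model recover part of a fully qualified name?
prompt = "from os.<mask> import join"
for candidate in fill_mask(prompt, top_k=5):
    print(f"{candidate['token_str'].strip():>12}  score={candidate['score']:.3f}")
```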