1. Fingerprinting and Building Large Reproducible Datasets

Authors: Romain Lefeuvre, Jessie Galasso, Benoit Combemale, Houari Sahraoui, Stefano Zacchiroli

Abstract: Obtaining a relevant dataset is central to conducting empirical studies in software engineering. However, in the context of mining software repositories, the lack of appropriate tooling for large-scale mining tasks hinders the creation of new datasets. Moreover, limitations related to data sources that change over time (e.g., code bases) and the lack of documentation of extraction processes make it difficult to reproduce datasets over time. This threatens the quality and reproducibility of empirical studies. In this paper, we propose a tool-supported approach facilitating the creation of large tailored datasets while ensuring their reproducibility. We leveraged all the sources feeding the Software Heritage append-only archive, which are accessible through a unified programming interface, to outline a reproducible and generic extraction process. We propose a way to define a unique fingerprint that characterizes a dataset and which, when provided to the extraction process, ensures that the same dataset will be extracted. We demonstrate the feasibility of our approach by implementing a prototype. We show how it can help reduce the limitations researchers face when creating or reproducing datasets.
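To make the fingerprinting idea concrete, here is a minimal sketch in Python, assuming the fingerprint is simply a hash over a canonical encoding of the extraction parameters (selection criteria plus an archive snapshot identifier). The function and parameter names are illustrative, not the paper's actual implementation.

```python
import hashlib
import json

def dataset_fingerprint(selection_criteria: dict, archive_snapshot: str) -> str:
    """Derive a deterministic fingerprint from the extraction parameters.

    Hypothetical sketch: the paper fingerprints extractions against the
    Software Heritage archive; here we only hash a canonical JSON encoding
    of the selection criteria plus a snapshot identifier.
    """
    canonical = json.dumps(
        {"criteria": selection_criteria, "snapshot": archive_snapshot},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same parameters always yield the same fingerprint, so re-running the
# extraction with this fingerprint reproduces the same dataset.
fp = dataset_fingerprint(
    {"language": "Python", "min_commits": 100, "origin": "github.com"},
    archive_snapshot="2023-06-01",
)
print(fp)
```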

2. Transparency in App Analytics: Analyzing the Collection of User Interaction Data

Authors: Feiyang Tang, Bjarte M. Østvold

Abstract: The rise of mobile apps has brought greater convenience and many options for users. However, many apps use analytics services to collect a wide range of user interaction data, with privacy policies often failing to reveal the types of interaction data collected or the extent of the data collection practices. This lack of transparency potentially breaches data protection laws and also undermines user trust. We conducted an analysis of the top 20 analytics libraries for Android apps to identify common practices of interaction data collection and used this information to develop a standardized collection claim template for summarizing an app's data collection practices with respect to user interaction data. We selected the top 100 apps from popular categories on Google Play and used automatic static analysis to extract collection evidence from their data collection implementations. Our analysis found that a significant majority of these apps actively collected interaction data from UI types such as View (89%), Button (76%), and Textfield (63%), highlighting the pervasiveness of user interaction data collection. By comparing the collection evidence to the claims derived from privacy policy analysis, we manually fact-checked the completeness and accuracy of these claims for the top 10 apps. We found that, except for one app, they all failed to declare all types of interaction data they collect and did not specify some of the collection techniques used.
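As a rough illustration of what "collection evidence" might look like, the sketch below pairs known analytics logging calls with the UI widget types referenced in a method body. The patterns and widget names are assumptions for illustration only; the paper extracts evidence via static analysis of app code, not regular expressions over snippets.

```python
import re
from collections import defaultdict

# Illustrative analytics-call patterns and UI types; not the paper's rule set.
ANALYTICS_CALLS = [r"FirebaseAnalytics\.logEvent", r"MixpanelAPI\.track"]
UI_TYPES = ["View", "Button", "Textfield"]

def collect_evidence(method_body: str) -> dict:
    """Map each UI type referenced in a method body to the analytics calls found there."""
    calls = []
    for pattern in ANALYTICS_CALLS:
        match = re.search(pattern, method_body)
        if match:
            calls.append(match.group(0))
    evidence = defaultdict(list)
    for ui_type in UI_TYPES:
        if re.search(rf"\b{ui_type}\b", method_body):
            evidence[ui_type].extend(calls)
    return dict(evidence)

body = 'Button buy = findViewById(R.id.buy); FirebaseAnalytics.logEvent("tap", null);'
print(collect_evidence(body))  # {'Button': ['FirebaseAnalytics.logEvent']}
```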

3. Helping Code Reviewer Prioritize: Pinpointing Personal Data and its Processing

Authors: Feiyang Tang, Bjarte M. Østvold, Magiel Bruntink

Abstract: Ensuring compliance with the General Data Protection Regulation (GDPR) is a crucial aspect of software development. This task, due to its time-consuming nature and requirement for specialized knowledge, is often deferred or delegated to specialized code reviewers. These reviewers, particularly when external to the development organization, may lack detailed knowledge of the software under review, necessitating the prioritization of their resources. To address this, we have designed two specialized views of a codebase to help code reviewers in prioritizing their work related to personal data: one view displays the types of personal data representation, while the other provides an abstract depiction of personal data processing, complemented by an optional detailed exploration of specific code snippets. Leveraging static analysis, our method identifies personal data-related code segments, thereby expediting the review process. Our approach, evaluated on four open-source GitHub applications, demonstrated a precision rate of 0.87 in identifying personal data flows. Additionally, we fact-checked the privacy statements of 15 Android applications. This solution, designed to augment the efficiency of GDPR-related privacy analysis tasks such as the Record of Processing Activities (ROPA), aims to conserve resources, thereby saving time and enhancing productivity for code reviewers.
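The sketch below shows, in a deliberately simplified form, the kind of personal-data flow the approach surfaces: a taint-style check from personal-data sources to sinks over a toy assignment graph. Source and sink names are hypothetical; the paper's static analysis operates on real codebases, not on this data structure.

```python
# Hypothetical sources of personal data and sinks where processing happens.
PERSONAL_DATA_SOURCES = {"getEmail", "getPhoneNumber", "getLocation"}
SINKS = {"log", "sendHttpRequest"}

def find_personal_data_flows(assignments: dict, calls: dict) -> list:
    """assignments: variable -> producing call; calls: sink -> argument variables."""
    tainted = {var for var, producer in assignments.items()
               if producer in PERSONAL_DATA_SOURCES}
    flows = []
    for sink, args in calls.items():
        if sink in SINKS:
            flows.extend((arg, sink) for arg in args if arg in tainted)
    return flows

assignments = {"email": "getEmail", "count": "size"}
calls = {"sendHttpRequest": ["email"], "log": ["count"]}
print(find_personal_data_flows(assignments, calls))  # [('email', 'sendHttpRequest')]
```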

4. Software Engineers' Questions and Answers on Stack Exchange

Authors: Matúš Sulír, Marcel Regeci

Abstract: There exists a large number of research works analyzing questions and answers on the popular Stack Overflow website. However, other sub-sites of the Stack Exchange platform are rarely studied. In this paper, we analyze the questions and answers on the Software Engineering Stack Exchange site, which encompasses a broader set of areas, such as testing or software processes. Topics and quantities of the questions, historical trends, and the authors' sentiment were analyzed using downloaded datasets. We found that the asked questions are most frequently related to database systems, quality assurance, and agile software development. The most attractive topics were career and teamwork problems, and the least attractive ones were network programming and software modeling. Historically, the topic of domain-driven design recorded the highest rise, and jobs and career the most significant fall. The number of new questions dropped, while the portion of unanswered ones increased.
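For readers who want to reproduce this kind of analysis, here is a minimal sketch of topic modeling over question titles from a Stack Exchange data dump. The file name, attribute handling, and model settings are assumptions; the paper's exact pipeline may differ.

```python
# Assumes Posts.xml from a Stack Exchange data dump is in the working directory.
import xml.etree.ElementTree as ET
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

titles = [
    row.attrib.get("Title", "")
    for row in ET.parse("Posts.xml").getroot()
    if row.attrib.get("PostTypeId") == "1"      # questions only
]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(titles)
lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-8:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```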

5. Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration

Authors: Giacomo Welsch, Peter Kowalczyk

Abstract: Prediction-oriented machine learning is becoming increasingly valuable to organizations, as it may drive applications in crucial business areas. However, decision-makers from companies across various industries are still largely reluctant to employ applications based on modern machine learning algorithms. We ascribe this issue to the widely held view of advanced machine learning algorithms as "black boxes" whose complexity does not allow for uncovering the factors that drive the output of a corresponding system. To help overcome this adoption barrier, we argue that research in information systems should devote more attention to the design of prototypical prediction-oriented machine learning applications (i.e., artifacts) whose predictions can be explained to human decision-makers. However, despite the recent emergence of a variety of tools that facilitate the development of such artifacts, there has so far been little research on their development. We attribute this research gap to the lack of methodological guidance to support the creation of these artifacts. For this reason, we develop a methodology which unifies methodological knowledge from design science research and predictive analytics with state-of-the-art approaches to explainable artificial intelligence. Moreover, we showcase the methodology using the example of price prediction in the sharing economy (i.e., on Airbnb).
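A compact sketch of the kind of artifact such a methodology targets is shown below: a price-prediction model whose individual predictions are explained with SHAP. The features and data are synthetic stand-ins for Airbnb listings, and the choice of gradient boosting plus SHAP is ours, not necessarily the paper's demonstration setup.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic listing data (illustrative only).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "bedrooms": rng.integers(1, 5, 500),
    "distance_to_center_km": rng.uniform(0.5, 15, 500),
    "review_score": rng.uniform(3.0, 5.0, 500),
})
y = (40 * X["bedrooms"] - 5 * X["distance_to_center_km"]
     + 20 * X["review_score"] + rng.normal(0, 10, 500))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Explain one prediction: per-feature contributions to the predicted price.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
print(dict(zip(X.columns, shap_values[0].round(2))))
```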

6. A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning

Authors: Nima Shiri Harzevili, Alvine Boaye Belle, Junjie Wang, Song Wang, Zhen Ming (Jack) Jiang, Nachiappan Nagappan

Abstract: Software vulnerability detection is critical in software security because it identifies potential bugs in software systems, enabling immediate remediation and mitigation measures to be implemented before they may be exploited. Automatic vulnerability identification is important because it can evaluate large codebases more efficiently than manual code auditing. Many Machine Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities in source code have been presented in recent years. However, a survey that summarises, classifies, and analyses the application of ML/DL models for vulnerability detection is missing. It may be difficult to discover gaps in existing research and potential for future improvement without a comprehensive survey. This could result in essential areas of research being overlooked or under-represented, leading to a skewed understanding of the state of the art in vulnerability detection. This work addresses that gap by presenting a systematic survey to characterize various features of ML/DL-based source code level software vulnerability detection approaches via five primary research questions (RQs). Specifically, our RQ1 examines the trend of publications that leverage ML/DL for vulnerability detection, including the evolution of research and the distribution of publication venues. RQ2 describes vulnerability datasets used by existing ML/DL-based models, including their sources, types, and representations, as well as analyses of the embedding techniques used by these approaches. RQ3 explores the model architectures and design assumptions of ML/DL-based vulnerability detection approaches. RQ4 summarises the type and frequency of vulnerabilities that are covered by existing studies. Lastly, RQ5 presents a list of current challenges to be researched and an outline of a potential research roadmap that highlights crucial opportunities for future work.
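To ground the category of approaches the survey classifies, the toy baseline below trains a bag-of-tokens classifier to flag suspicious C fragments. It is illustrative only: real detectors covered by the survey rely on far larger datasets, richer code embeddings, and deep models rather than a handful of labeled snippets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-labeled toy corpus (1 = vulnerable, 0 = not); purely illustrative.
functions = [
    "strcpy(buf, user_input);",                      # unbounded copy
    "strncpy(buf, user_input, sizeof(buf) - 1);",
    "system(user_command);",                         # command injection risk
    "execvp(argv[0], argv);",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*"),  # tokenize identifiers
    LogisticRegression(max_iter=1000),
).fit(functions, labels)

print(clf.predict(["strcpy(dst, src);"]))
```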

7. Automated Grading and Feedback Tools for Programming Education: A Systematic Review

Authors: Marcus Messer, Neil C. C. Brown, Michael Kölling, Miaojing Shi

Abstract: We conducted a systematic literature review on automated grading and feedback tools for programming education. We analysed 121 research papers from 2017 to 2021 inclusive and categorised them based on skills assessed, grading approach, language paradigm, degree of automation and evaluation techniques. Most papers grade the correctness of object-oriented assignments. Typically, these tools use a dynamic technique, primarily unit testing, to provide grades and feedback to the students. However, compared to correctness grading, few tools assess readability, maintainability, or documentation, focusing solely on the presence of documentation, not documentation quality.
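The sketch below illustrates the dominant pattern the review reports: dynamic, unit-test-based grading of correctness. The assignment ("write add(a, b)") and the grade-as-fraction-of-passing-tests scheme are made up for illustration.

```python
import unittest

def grade_submission(submission_namespace: dict) -> float:
    """Grade a student submission by running a hidden unit-test suite against it."""
    add = submission_namespace.get("add")

    class AddTests(unittest.TestCase):
        def test_positive(self): self.assertEqual(add(2, 3), 5)
        def test_negative(self): self.assertEqual(add(-2, -3), -5)
        def test_zero(self): self.assertEqual(add(0, 7), 7)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
    result = unittest.TestResult()
    suite.run(result)
    passed = result.testsRun - len(result.failures) - len(result.errors)
    return passed / result.testsRun   # grade = fraction of passing tests

student_code = "def add(a, b):\n    return a + b"
namespace = {}
exec(student_code, namespace)
print(grade_submission(namespace))   # 1.0
```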

8. Towards a Definition of Complex Software System

Authors: Jan Žižka, Bruno Rossi, Tomáš Pitner

Abstract: Complex Systems were identified and studied in different fields, such as physics, biology, and economics. These systems exhibit exciting properties such as self-organization, robust order, and emergence. In recent years, software systems displaying behaviors associated with Complex Systems are starting to appear, and these behaviors are showing previously unknown potential (e.g., GPT-based applications). Yet, there is no commonly shared definition of a Complex Software System that can serve as a key reference for academia to support research in the area. In this paper, we adopt the theory-to-research strategy to extract properties of Complex Systems from research in other fields, mapping them to software systems to create a formal definition of a Complex Software System. We support the evolution of the properties through future validation, and we provide examples of the application of the definition. Overall, the definition will allow for a more precise, consistent, and rigorous frame of reference for conducting scientific research on software systems.

9. Outside the Sandbox: A Study of Input/Output Methods in Java

Authors: Matúš Sulír, Sergej Chodarev, Milan Nosáľ

Abstract: Programming languages often demarcate the internal sandbox, consisting of entities such as objects and variables, from the outside world, e.g., files or network. Although communication with the external world poses fundamental challenges for live programming, reversible debugging, testing, and program analysis in general, studies about this phenomenon are rare. In this paper, we present a preliminary empirical study about the prevalence of input/output (I/O) method usage in Java. We manually categorized 1435 native methods in a Java Standard Edition distribution into non-I/O and I/O-related methods, which were further classified into areas such as desktop or file-related ones. According to the static analysis of a call graph for 798 projects, about 57% of methods potentially call I/O natives. The results of dynamic analysis on 16 benchmarks showed that 21% of the executed methods directly or indirectly called an I/O native. We conclude that neglecting I/O is not a viable option for tool designers and suggest the integration of I/O-related metadata with source code to facilitate their querying.
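A minimal sketch of the static part of this study follows: given a call graph and a set of I/O-related native methods, find every method that can transitively reach one. The graph and method names are invented for illustration.

```python
from collections import deque

def methods_reaching_io(call_graph: dict, io_natives: set) -> set:
    """Return the methods whose call chains can reach an I/O-related native.

    call_graph maps a method to the methods it calls; reachability is
    computed with a breadth-first walk from each method.
    """
    reaching = set()
    for method in call_graph:
        queue, seen = deque([method]), {method}
        while queue:
            current = queue.popleft()
            if current in io_natives:
                reaching.add(method)
                break
            for callee in call_graph.get(current, ()):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
    return reaching

call_graph = {
    "App.main": ["Service.run"],
    "Service.run": ["java.io.FileInputStream.read0"],
    "Util.max": [],
}
io_natives = {"java.io.FileInputStream.read0"}
print(methods_reaching_io(call_graph, io_natives))  # {'App.main', 'Service.run'}
```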

10. Towards Understanding What Code Language Models Learned

Authors: Toufique Ahmed, Dian Yu, Chengxuan Huang, Cathy Wang, Prem Devanbu, Kenji Sagae

Abstract: Pre-trained language models are effective in a variety of natural language tasks, but it has been argued their capabilities fall short of fully learning meaning or understanding language. To understand the extent to which language models can learn some form of meaning, we investigate their ability to capture semantics of code beyond superficial frequency and co-occurrence. In contrast to previous research on probing models for linguistic features, we study pre-trained models in a setting that allows for objective and straightforward evaluation of a model's ability to learn semantics. In this paper, we examine whether such models capture the semantics of code, which is precisely and formally defined. Through experiments involving the manipulation of code fragments, we show that pre-trained models of code learn a robust representation of the computational semantics of code that goes beyond superficial features of form alone.
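The probing idea can be illustrated roughly as below: compare a model's representation of a fragment with (a) a form-changing but meaning-preserving variant and (b) a form-similar but meaning-changing variant. The choice of CodeBERT, mean-pooled embeddings, and these particular manipulations is an assumption of this sketch, not necessarily the paper's experimental setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(code: str) -> torch.Tensor:
    """Mean-pool the last hidden state as a simple fragment representation."""
    inputs = tokenizer(code, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

original       = "def total(xs):\n    return sum(xs) / len(xs)"
same_semantics = "def avg(values):\n    return sum(values) / len(values)"  # renamed only
diff_semantics = "def total(xs):\n    return sum(xs) * len(xs)"            # operator changed

cos = torch.nn.functional.cosine_similarity
print("renamed :", cos(embed(original), embed(same_semantics), dim=0).item())
print("operator:", cos(embed(original), embed(diff_semantics), dim=0).item())
```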