1. A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages

Authors: Alessio Buscemi

Abstract: Large Language Models (LLMs) are advanced Artificial Intelligence (AI) systems that have undergone extensive training on large datasets in order to understand and produce language that closely resembles that of humans. These models have reached a level of proficiency where they are capable of successfully completing university exams across several disciplines and generating functional code to handle novel problems. This research investigates the coding proficiency of ChatGPT 3.5, an LLM released by OpenAI in November 2022, which has gained significant recognition for its impressive text generation and code creation capabilities. The model's skill in creating code snippets is evaluated across 10 programming languages and 4 software domains. Based on the findings of this research, major unexpected behaviors and limitations of the model have been identified. This study aims to identify potential areas for development and examine the ramifications of automated code generation for the evolution of programming languages and for the tech industry.

2. A Dataset and Analysis of Open-Source Machine Learning Products

Authors: Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, Christian Kästner

Abstract: Machine learning (ML) components are increasingly incorporated into software products, yet developers face challenges in transitioning from ML prototypes to products. Academic researchers struggle to propose solutions to these challenges and evaluate interventions because they often do not have access to closed-source ML products from industry. In this study, we define and identify open-source ML products, curating a dataset of 262 repositories from GitHub, to facilitate further research and education. As a start, we explore six broad research questions related to different development activities and report 21 findings from a sample of 30 ML products from the dataset. Our findings reveal a variety of development practices and architectural decisions surrounding different types and uses of ML models that offer ample opportunities for future research innovations. We also find very little evidence of industry best practices such as model testing and pipeline automation within the open-source ML products, which leaves room for further investigation to understand their potential impact on the development and eventual end-user experience of these products.

3. Fair and Inclusive Participatory Budgeting: Voter Experience with Cumulative and Quadratic Voting Interfaces

Authors: Thomas Welling, Fatemeh Banaie Heravan, Abhinav Sharma, Lodewijk Gelauff, Regula Haenggli, Evangelos Pournaras

Abstract: Cumulative and quadratic voting are two distributional voting methods that are expressive, promoting fairness and inclusion, particularly in the realm of participatory budgeting. Despite these benefits, graphical voter interfaces for cumulative and quadratic voting are complex to implement and use effectively. As a result, such methods have not yet seen widespread adoption on digital voting platforms. This paper addresses the challenge by introducing an implementation and evaluation of cumulative and quadratic voting within a state-of-the-art voting platform: Stanford Participatory Budgeting. The findings of the study show that, while voters prefer simple methods, the more expressive (and more complex) cumulative voting becomes the preferred method over k-ranking voting, which is simpler but less expressive. The implemented voting interface elements are found to be useful and support the observed voter preference for more expressive voting methods.
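As background to the abstract above (not drawn from the paper itself), the standard cost rules of the two methods can be sketched briefly: under cumulative voting, each vote costs one credit from the voter's budget, while under quadratic voting, casting v votes on a single option costs v² credits, which discourages concentrating all credits on one project. The function names and the example allocation are illustrative.

```python
# Minimal sketch of the standard credit-cost rules for cumulative
# and quadratic voting over a list of per-project vote counts.

def cumulative_cost(votes):
    # Cumulative voting: each vote spends one credit.
    return sum(votes)

def quadratic_cost(votes):
    # Quadratic voting: v votes on one project spend v**2 credits.
    return sum(v * v for v in votes)

# A voter spreading votes across three budget proposals.
allocation = [3, 2, 1]
print(cumulative_cost(allocation))  # 6 credits
print(quadratic_cost(allocation))   # 14 credits (9 + 4 + 1)
```

Note how the quadratic rule makes the concentrated part of the allocation (3 votes) disproportionately expensive, which is the mechanism behind its expressiveness and fairness properties.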

4. The Inverse Transparency Toolchain: A Fully Integrated and Quickly Deployable Data Usage Logging Infrastructure

Authors: Valentin Zieglmeier

Abstract: Inverse transparency is created by making all usages of employee data visible to the employees themselves. This requires tools that handle the logging and storage of usage information and make the logged data visible to data owners. For research and teaching contexts that integrate inverse transparency, creating the required infrastructure can be challenging. The Inverse Transparency Toolchain presents a flexible solution for such scenarios. It can be easily deployed and is tightly integrated. With it, we successfully handled use cases covering empirical studies with users, prototyping in university courses, and experimentation with our industry partner.
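To illustrate the idea described in the abstract (this is a generic sketch of the inverse-transparency pattern, not the toolchain's actual API; all class and field names are hypothetical), a usage-logging component records who used whose data and for what purpose, and exposes each data owner's own entries back to them:

```python
# Illustrative sketch: log data usages, then let each data owner
# (employee) see every recorded usage of their own data.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UsageEvent:
    data_owner: str   # employee whose data was used
    consumer: str     # person or tool that used the data
    purpose: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class UsageLog:
    def __init__(self):
        self._events = []

    def record(self, event: UsageEvent) -> None:
        self._events.append(event)

    def visible_to(self, owner: str) -> list:
        # Inverse transparency: an owner sees all usages of their data.
        return [e for e in self._events if e.data_owner == owner]

log = UsageLog()
log.record(UsageEvent("alice", "manager_tool", "performance review"))
log.record(UsageEvent("bob", "hr_dashboard", "headcount report"))
print([e.consumer for e in log.visible_to("alice")])  # ['manager_tool']
```

The key design point is the separation the abstract describes: one concern is recording and storing usage information, the other is presenting the stored entries to the affected data owners.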