
Databases (cs.DB)
Thu, 10 Aug 2023
1. Building a serverless Data Lakehouse from spare parts
Authors: Jacopo Tagliabue, Ciro Greco, Luca Bigon
Abstract: The recently proposed Data Lakehouse architecture is built on open file formats, performance, and first-class support for data transformation, BI and data science: while the vision stresses the importance of lowering the barrier for data work, existing implementations often struggle to live up to user expectations. At Bauplan, we decided to build a new serverless platform to fulfill the Lakehouse vision. Since building from scratch is a challenge unfit for a startup, we started by re-using (sometimes unconventionally) existing projects, and then investing in improving the areas that would give us the highest marginal gains for the developer experience. In this work, we review user experience, high-level architecture and tooling decisions, and conclude by sharing plans for future development.
2. Co-movement Pattern Mining from Videos
Authors: Dongxiang Zhang, Teng Ma, Junnan Hu, Yijun Bei, Kian-Lee Tan, Gang Chen
Abstract: Co-movement pattern mining from GPS trajectories has been an intriguing subject in spatial-temporal data mining. In this paper, we extend this research line by migrating the data source from GPS sensors to surveillance cameras, and presenting the first investigation into co-movement pattern mining from videos. We formulate the new problem, re-define the spatial-temporal proximity constraints for cameras deployed in a road network, and theoretically prove its hardness. Due to the lack of readily applicable solutions, we adapt existing techniques and propose two competitive baselines using an Apriori-based enumerator and the CMC algorithm, respectively. As the principal technical contributions, we introduce a novel index called the temporal-cluster suffix tree (TCS-tree), which performs two-level temporal clustering within each camera and constructs a suffix tree from the resulting clusters. Moreover, we present a sequence-ahead pruning framework based on the TCS-tree, which allows all pattern constraints to be leveraged simultaneously to filter candidate paths. Finally, to reduce the verification cost on candidate paths, we propose a sliding-window-based co-movement pattern enumeration strategy and a hashing-based dominance eliminator, both of which are effective in avoiding redundant operations. We conduct extensive experiments for scalability and effectiveness analysis. Our results validate the efficiency of the proposed index and mining algorithm, which run remarkably faster than the two baseline methods. Additionally, we construct a video database with 1169 cameras and perform an end-to-end pipeline analysis to study the performance gap between GPS-driven and video-driven methods. Our results demonstrate that the patterns derived by the video-driven approach are similar to those derived from ground-truth trajectories, providing evidence of its effectiveness.
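As an aside, here is a minimal sketch of the general idea behind the TCS-tree as described above: gap-based temporal clustering per camera, followed by a suffix structure over the resulting (camera, cluster) tokens. The clustering rule, the gap threshold, the toy detections, and the use of an uncompressed suffix trie in place of a compressed suffix tree are illustrative simplifications, not the paper's exact construction.

```python
# Illustrative sketch only: per-camera temporal clustering plus a suffix trie
# over (camera, cluster) tokens. Not the paper's TCS-tree implementation.
from collections import defaultdict

def temporal_clusters(timestamps, gap=30.0):
    """Group sorted timestamps into clusters; start a new cluster when the gap exceeds `gap`."""
    clusters, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(t)
    if current:
        clusters.append(current)
    return clusters

class SuffixTrie:
    """Uncompressed suffix trie over token sequences; each node records which
    object ids pass through the corresponding path."""
    def __init__(self):
        self.root = {}

    def insert(self, tokens, obj_id):
        for start in range(len(tokens)):          # insert every suffix
            node = self.root
            for tok in tokens[start:]:
                node = node.setdefault(tok, {"_objs": set()})
                node["_objs"].add(obj_id)

    def objects_on_path(self, tokens):
        node = self.root
        for tok in tokens:
            if tok not in node:
                return set()
            node = node[tok]
        return node.get("_objs", set())

# Toy input: detections[obj_id] = list of (camera_id, timestamp) observations.
detections = {
    "car_1": [("cam_A", 10.0), ("cam_B", 55.0), ("cam_C", 90.0)],
    "car_2": [("cam_A", 12.0), ("cam_B", 58.0), ("cam_C", 95.0)],
}

# Map each (camera, timestamp) to a (camera, cluster index) token.
per_camera_times = defaultdict(list)
for obs in detections.values():
    for cam, t in obs:
        per_camera_times[cam].append(t)
cluster_bounds = {cam: temporal_clusters(ts) for cam, ts in per_camera_times.items()}

def to_token(cam, t):
    for idx, cluster in enumerate(cluster_bounds[cam]):
        if cluster[0] <= t <= cluster[-1]:
            return (cam, idx)
    raise ValueError("timestamp outside all clusters")

trie = SuffixTrie()
for obj_id, obs in detections.items():
    tokens = [to_token(cam, t) for cam, t in sorted(obs, key=lambda o: o[1])]
    trie.insert(tokens, obj_id)

# Objects traversing cam_B then cam_C within the same temporal clusters:
print(trie.objects_on_path([("cam_B", 0), ("cam_C", 0)]))   # {'car_1', 'car_2'}
```

In this toy setup, candidate co-moving objects can be read off a shared trie path instead of being enumerated pairwise, which is the intuition behind using the index for pruning.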
3. LLM As DBA
Authors: Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
Abstract: Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree-of-thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
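For intuition only, the following is a hypothetical sketch of such a diagnosis loop: collect monitoring signals, retrieve relevant maintenance knowledge, and prompt an LLM for a root-cause hypothesis. The function names, the retrieval stub, and the knowledge snippets are placeholders and do not reflect D-Bot's actual interfaces.

```python
# Hypothetical sketch of an LLM-assisted diagnosis loop; all interfaces are
# placeholders, not D-Bot's code.
from typing import Callable

KNOWLEDGE_BASE = [
    "High CPU with many sequential scans often indicates missing indexes.",
    "Lock wait spikes during peak hours suggest long-running transactions.",
]

def retrieve_knowledge(symptoms: dict, k: int = 2) -> list:
    # Stand-in retrieval: a real system would use embedding similarity search.
    return KNOWLEDGE_BASE[:k]

def diagnose(fetch_metrics: Callable[[], dict],
             call_llm: Callable[[str], str]) -> str:
    symptoms = fetch_metrics()                      # e.g., from monitoring tools
    context = "\n".join(retrieve_knowledge(symptoms))
    prompt = (
        "You are a database administrator.\n"
        f"Observed metrics: {symptoms}\n"
        f"Relevant maintenance knowledge:\n{context}\n"
        "List candidate root causes, reason about each briefly, "
        "then recommend one optimization action."
    )
    return call_llm(prompt)

# Usage with stubbed dependencies:
report = diagnose(
    fetch_metrics=lambda: {"cpu_util": 0.95, "seq_scans_per_min": 1200},
    call_llm=lambda prompt: "Likely missing index on a hot table; verify with EXPLAIN.",
)
print(report)
```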
4. Banzhaf Values for Facts in Query Answering
Authors: Omer Abramovich, Daniel Deutch, Nave Frost, Ahmet Kara, Dan Olteanu
Abstract: Quantifying the contribution of database facts to query answers has been studied as a means of explanation. The Banzhaf value, originally developed in game theory, is a natural measure of fact contribution, yet its efficient computation for select-project-join-union queries is challenging. In this paper, we introduce three algorithms to compute the Banzhaf value of database facts: an exact algorithm, an anytime deterministic approximation algorithm with relative error guarantees, and an algorithm for ranking and top-$k$. They have three key building blocks: compilation of the query lineage into an equivalent function that allows efficient Banzhaf value computation; dynamic-programming computation of the Banzhaf values of variables in a Boolean function using the Banzhaf values of its constituent functions; and a mechanism to efficiently compute lower and upper bounds on Banzhaf values for any positive DNF function. We complement the algorithms with a dichotomy for the Banzhaf-based ranking problem: given two facts, deciding whether the Banzhaf value of one is greater than that of the other is tractable for hierarchical queries and intractable for non-hierarchical queries. We show experimentally that our algorithms significantly outperform exact and approximate algorithms from prior work, often by up to two orders of magnitude. Our algorithms can also cover challenging problem instances that are beyond the reach of prior work.
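For readers unfamiliar with the measure, the brute-force illustration below computes Banzhaf values over a small positive DNF lineage, counting for each fact the assignments of the other facts under which it is critical to the query answer. The exponential enumeration is only for intuition; the paper's algorithms are designed precisely to avoid it, and the tiny lineage is an invented example.

```python
# Brute-force Banzhaf values over a positive DNF lineage (for intuition only).
from itertools import product

def eval_dnf(dnf, assignment):
    """dnf: list of clauses, each a set of variable names (conjunction);
    assignment: dict var -> bool. True if any clause is satisfied."""
    return any(all(assignment[v] for v in clause) for clause in dnf)

def banzhaf(dnf, variables, x):
    """Count assignments of the other variables where x flips the answer."""
    others = [v for v in variables if v != x]
    count = 0
    for bits in product([False, True], repeat=len(others)):
        assignment = dict(zip(others, bits))
        hi = eval_dnf(dnf, {**assignment, x: True})
        lo = eval_dnf(dnf, {**assignment, x: False})
        count += int(hi) - int(lo)          # 1 exactly when x is critical
    return count

# Lineage of a tiny join query: the answer is derived by (r1 AND s1) OR (r2 AND s1).
lineage = [{"r1", "s1"}, {"r2", "s1"}]
facts = ["r1", "r2", "s1"]
for fact in facts:
    print(fact, banzhaf(lineage, facts, fact))   # r1 -> 1, r2 -> 1, s1 -> 3
# s1 participates in every derivation, so it gets the largest Banzhaf value.
```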
5. The Fast and the Private: Task-based Dataset Search
Authors: Zezhou Huang, Jiaxiang Liu, Haonan Wang, Eugene Wu
Abstract: Recent dataset search platforms use ML task-based utility measures rather than metadata-based keywords to search large dataset corpora. Requesters provide an initial dataset, and the platform seeks additional datasets that augment -- join or union -- the requester's dataset so as to most improve the model's (e.g., linear regression) performance. Although effective, current task-based data searches are stymied by (1) high latency, which deters users, (2) privacy concerns arising from regulatory standards, and (3) low data quality, which yields low utility. We introduce Mileena, a fast, private, and high-quality task-based dataset search platform. At its heart, Mileena is built on pre-computed semi-ring sketches for efficient ML training and evaluation. Based on semi-rings, we develop a novel Factorized Privacy Mechanism that makes the search differentially private and scales to arbitrary corpus sizes and numbers of requests without major quality degradation. We also demonstrate the early promise of using LLM-based agents for automatic data transformation and of applying semi-rings to support causal discovery and treatment effect estimation.
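To illustrate the semi-ring idea in isolation: the sketch below computes the sufficient statistics of a linear regression over a two-table join from per-key aggregates, so the join is never materialized. The table names, columns, and two-table setting are assumptions for illustration, not Mileena's API.

```python
# Illustrative factorized computation of regression statistics over R JOIN S;
# table layout and columns are invented for the example.
from collections import defaultdict

# R(key, x, y): base table with feature x and target y; S(key, z): augmentation.
R = [("a", 1.0, 2.0), ("a", 2.0, 3.0), ("b", 4.0, 5.0)]
S = [("a", 10.0), ("b", 20.0), ("b", 30.0)]

# Per-key semi-ring aggregates for R: (count, sum x, sum y, sum x*x, sum x*y).
agg_R = defaultdict(lambda: [0, 0.0, 0.0, 0.0, 0.0])
for k, x, y in R:
    c = agg_R[k]
    c[0] += 1; c[1] += x; c[2] += y; c[3] += x * x; c[4] += x * y

# Per-key aggregates for S: (count, sum z, sum z*z).
agg_S = defaultdict(lambda: [0, 0.0, 0.0])
for k, z in S:
    c = agg_S[k]
    c[0] += 1; c[1] += z; c[2] += z * z

# Combine: statistics of the join follow from products of per-key aggregates.
n = sxx = szz = sxz = sxy = szy = 0.0
for k in agg_R.keys() & agg_S.keys():
    cr, sx, sy, sxx_k, sxy_k = agg_R[k]
    cs, sz, szz_k = agg_S[k]
    n   += cr * cs
    sxx += sxx_k * cs          # sum of x^2 over joined rows
    szz += cr * szz_k          # sum of z^2 over joined rows
    sxz += sx * sz             # sum of x*z over joined rows
    sxy += sxy_k * cs          # sum of x*y over joined rows
    szy += sy * sz             # sum of z*y over joined rows

print(n, sxx, szz, sxz, sxy, szy)
# These are exactly the entries of X^T X and X^T y that the normal equations
# for regressing y on (x, z) over the joined table would need.
```

Because only these compact aggregates leave each table, the same statistics are a natural place to inject noise for a differentially private mechanism, which is the spirit (though not the detail) of the Factorized Privacy Mechanism described above.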