|Peter Reimann: Metadata Modeling and Use of Domain Knowledge to Support Industrial Data Analytics
Industrial Analytics refers to problems and solution approaches for data management, data provision, and data analytics across the different phases of the industrial product life cycle. This presentation gives an overview of the research topics of the junior research group “ICT Platform for Manufacturing” at the Graduate School of Excellence advanced Manufacturing Engineering (GSaME) of the University of Stuttgart. The group conducts both application-oriented and fundamental research in the area of Industrial Analytics. The talk details two specific research topics and corresponding project results. First, an approach to metadata modeling is presented that connects heterogeneous data from various previously isolated data sources in virtual product development and the corresponding CAx systems. The work activities of product development projects, as well as the data these activities consume and produce, are explicitly represented in the metadata. This facilitates democratized data access, so that product development engineers can easily find the data associated with the work activities of development projects they are familiar with. The second major part of the talk presents an approach that exploits domain knowledge during data preparation in order to address two of the most challenging characteristics of industrial data: a multi-class imbalance and an aggregation bias that is due to the high variety of underlying products. This approach is evaluated with a use case on the data-driven identification of quality issues in assembled truck engines. It is shown that the approach leads to a significant increase in classification accuracy and to a reduction in the number of rework steps needed to repair faulty truck engines.
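The abstract does not disclose how the approach counters the multi-class imbalance, so the following is only a generic illustration of one common data-preparation remedy: random oversampling of minority fault classes until all classes are equally represented. The function name `oversample_to_balance` and the class labels are hypothetical and not taken from the talk.

```python
import random
from collections import defaultdict

def oversample_to_balance(samples, labels, seed=0):
    """Randomly oversample minority classes so that every class
    reaches the size of the largest class (generic illustration,
    not the method presented in the talk)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    # Target size: the cardinality of the majority class.
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # Keep all original samples of this class ...
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
        # ... and draw additional samples with replacement until
        # the class matches the majority-class size.
        extra = rng.choices(xs, k=target - len(xs))
        out_x.extend(extra)
        out_y.extend([y] * len(extra))
    return out_x, out_y
```

In an engine-quality setting like the one described, the majority class ("no issue") would dominate the rare fault classes; after balancing, each class contributes equally many training samples, at the cost of duplicated minority records.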
Peter Reimann studied computer science at the University of Stuttgart and received his PhD in 2016 at the Institute for Parallel and Distributed Systems (IPVS) and at the Cluster of Excellence Simulation Technology (SimTech) of the University of Stuttgart. His PhD topic was related to data management and data provision for computer-based simulations and simulation workflows. From July to September 2015, he was a visiting scholar at the University of Illinois at Urbana-Champaign in the USA. Since 2017, he has been head of a junior research group at the Graduate School of Excellence advanced Manufacturing Engineering (GSaME) of the University of Stuttgart. His research covers both application-oriented and fundamental topics in the areas of data provision, data management, data analysis, and machine learning for industrial use cases (Industrial Analytics).
|Manuel Fritz: Data Management in Semiconductor Manufacturing Equipment: Current State and Challenges
The semiconductor industry is a key enabler for technological advancements such as AI and Industry 4.0 by providing ever shrinking transistor sizes, more storage, and more computing power at lower energy consumption and cost. The production of semiconductor manufacturing equipment relies on machines that achieve high yield and high output, and thus on ultra-precise manufacturing processes. In this talk, we look at how a lithography system is produced and unveil its interdependencies with data management and analytics. We present data management as an enabler for automated, robust, high-end manufacturing processes, and show how data analytics can support the overarching company goals on a technical and organizational level to achieve competitive benefits, such as increased product output, reduced costs, improved processes, and even a longer product lifespan.
Dr. Manuel Fritz works in the Process Data Systems Engineering Group at Zeiss Semiconductor Manufacturing Technology and is a Product Owner for Data Analytics. He studied Computer Science at the Baden-Wuerttemberg Cooperative State University (B.Sc.) and at the University of Furtwangen (M.Sc.). He holds a PhD in Computer Science from the University of Stuttgart and is currently pursuing an MBA at Quantic School of Business and Technology. His research focuses on Data Analytics, Big Data, Meta-Learning, and AutoML.
|Jan Schneider: Lakehouses: Transferring Database Concepts to Data Lakes
In times of digital transformation, enterprises need to store, organize, and analyze huge amounts of data in order to exploit it for competitive advantages. Analytical data platforms form the technical foundation for these tasks, and in the past, two types of them in particular have gained popularity: While data warehouses have traditionally been used by business analysts for reporting and OLAP, data lakes emerged as an alternative concept that also supports advanced analytics, such as data mining and machine learning. Since both types of data platforms exhibit rather contrary characteristics and target different types of analytics, enterprises usually have to employ both of them, which leads to complex and costly architectures. Due to these issues, efforts are currently being made to combine the features of data warehouses and data lakes into integrated data platforms, so-called lakehouses, which can serve all types of analytical workloads. The vision of lakehouses raises the need to enhance data lakes with established features of data warehouses, such as relational data structures, ACID transactions, concurrency control, and time travel capabilities. However, this proves to be challenging, because data lakes, which commonly consist of distributed file systems and cloud object stores, tend to differ fundamentally from typical database environments and POSIX-compatible file systems. This presentation provides an overview of the goals of lakehouses and the current state of research, and in particular discusses the challenges that arise when attempting to implement common database features on top of data lakes.
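To make the combination of ACID-style commits and time travel on top of plain file storage more concrete, here is a minimal toy sketch, not taken from the talk: a table is described by an append-only log of numbered, immutable commit files, and a snapshot at any version is obtained by replaying the log, which is the basic idea behind lakehouse table formats. The class name `TinyTableLog` and all file names are illustrative; the sketch uses a local directory, since real object stores complicate exactly the atomic create-if-absent step relied on below.

```python
import json
import os

class TinyTableLog:
    """Toy sketch of a lakehouse-style transaction log on a local
    directory: each commit is an immutable JSON file named by its
    version, listing data files added to and removed from the table."""

    def __init__(self, log_dir):
        self.log_dir = log_dir
        os.makedirs(log_dir, exist_ok=True)

    def _version_path(self, v):
        return os.path.join(self.log_dir, f"{v:020d}.json")

    def latest_version(self):
        versions = [int(name.split(".")[0])
                    for name in os.listdir(self.log_dir)
                    if name.endswith(".json")]
        return max(versions, default=-1)

    def commit(self, adds, removes=()):
        """Optimistic commit: atomically create the next numbered log
        file. Open mode 'x' fails with FileExistsError if a concurrent
        writer claimed the same version first."""
        v = self.latest_version() + 1
        entry = {"add": list(adds), "remove": list(removes)}
        with open(self._version_path(v), "x") as f:
            json.dump(entry, f)
        return v

    def snapshot(self, as_of=None):
        """Replay the log up to a given version to obtain the set of
        live data files; replaying older versions is what enables
        time travel."""
        v = self.latest_version() if as_of is None else as_of
        live = set()
        for i in range(v + 1):
            with open(self._version_path(i)) as f:
                entry = json.load(f)
            live -= set(entry["remove"])
            live |= set(entry["add"])
        return live
```

The sketch also hints at the challenge the abstract raises: the atomicity here rests on the file system rejecting a duplicate file name, a guarantee that cloud object stores have historically not offered in this form, which is why production table formats need additional coordination mechanisms.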