References
Zeuch, Steffen; Chaudhary, Ankit; Del Monte, Bonaventura; Gavriilidis, Haralampos; Giouroukis, Dimitrios; Grulich, Philipp M.; Breß, Sebastian; Traub, Jonas; Markl, Volker
The NebulaStream Platform: Data and Application Management for the Internet of Things
Conference on Innovative Data Systems Research (CIDR) 2020
2019
accepted

Abstract: The Internet of Things (IoT) presents a novel computing architecture for data management: a distributed, highly dynamic, and heterogeneous environment of massive scale. Applications for the IoT introduce new challenges for integrating the concepts of fog and cloud computing in one unified environment. In this paper, we highlight these major challenges and showcase how existing systems handle them. Based on these challenges, we introduce the NebulaStream platform, a general purpose, end-to-end data management system for the IoT. NebulaStream addresses the heterogeneity and distribution of compute and data, supports diverse data and programming models going beyond relational algebra, deals with potentially unreliable communication, and enables constant evolution under continuous operation. In our evaluation, we demonstrate the effectiveness of our approach by providing early results on partial aspects.

Wisiol, Nils; Pirnay, Niklas
XOR Arbiter PUFs have Systematic Response Bias
Proceedings of the 24th International Conference on Financial Cryptography and Data Security
2019
accepted
Behnke, Ilja; Thamsen, Lauritz; Kao, Odej
Héctor: A Framework for Testing IoT Applications Across Heterogeneous Edge and Cloud Testbeds
Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2019), pages 15-20.
December 2019

Abstract: As a result of the many technical advances in microcomputers and mobile connectivity, the Internet of Things (IoT) has been on the rise in the recent decade. Due to the broad spectrum of applications, networks facilitating IoT scenarios can be of very different scale and complexity. Additionally, connected devices are uncommonly heterogeneous, including microcontrollers, smartphones, fog nodes and server infrastructures. Therefore, testing IoT applications is difficult, motivating adequate tool support. In this paper, we present Héctor, a framework for the automatic testing of IoT applications. Héctor allows the automated execution of user-defined experiments on agnostic IoT testbeds. To test applications independently of the availability of required devices, the framework is able to generate virtual testbeds with adjustable network properties. Our evaluations show that simple experiments can be easily automated across a broad spectrum of testbeds. However, the results also indicate that there is considerable interference in experiments, in which many devices are emulated, due to the high resource demand of system emulation.

Thamsen, Lauritz; Verbitskiy, Ilya; Nedelkoski, Sasho; Tran, Vinh Thuy; Meyer, Vinicius; Xavier, Miguel G.; Kao, Odej; De Rose, Cesar A. F.
Hugo: A Cluster Scheduler that Efficiently Learns to Select Complementary Data-Parallel Jobs
Euro-Par 2019: Parallel Processing Workshops
2019
to be published

Abstract: Distributed data processing systems like MapReduce, Spark, and Flink are popular tools for analysis of large datasets with cluster resources. Yet, users often overprovision resources for their data processing jobs, while the resource usage of these jobs also typically fluctuates considerably. Therefore, multiple jobs usually get scheduled onto the same shared resources to increase the resource utilization and throughput of clusters. However, job runtimes and the utilization of shared resources can vary significantly depending on the specific combinations of co-located jobs. This paper presents Hugo, a cluster scheduler that continuously learns how efficiently jobs share resources, considering metrics for the resource utilization and interference among co-located jobs. The scheduler combines offline grouping of jobs with online reinforcement learning to provide a scheduling mechanism that efficiently generalizes from specific monitored job combinations yet also adapts to changes in workloads. Our evaluation of a prototype shows that the approach can reduce the runtimes of exemplary Spark jobs on a YARN cluster by up to 12.5%, while resource utilization is increased and waiting times can be bounded.
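
The "online reinforcement learning" component can be pictured as a bandit over job groups: try a group, observe how well the co-located jobs share resources, and favor groups that worked well. The epsilon-greedy sketch below is a generic stand-in under that reading, not Hugo's scheduler; pick_group and rewards are hypothetical names.

```python
import random

def pick_group(groups, rewards, epsilon=0.1):
    """Epsilon-greedy choice of which job group to co-locate next.
    rewards maps a group to its observed payoffs (e.g., utilization
    minus an interference penalty)."""
    if random.random() < epsilon:
        return random.choice(groups)              # explore
    def mean(g):
        obs = rewards.get(g, [])
        return sum(obs) / len(obs) if obs else 0.0
    return max(groups, key=mean)                  # exploit the best so far

# After each scheduling round: rewards.setdefault(chosen, []).append(obs)
```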

Semmler, Niklas; Smaragdakis, Georgios; Feldmann, Anja
Online Replication Strategies for Distributed Data Stores
Open Journal of Internet of Things (OJIOT), 5(1):47-57
August 2019
ISSN: 2364-7108

Abstract: The rate at which data is produced at the network edge, e.g., collected from sensors and Internet of Things (IoT) devices, will soon exceed the storage and processing capabilities of a single system and the capacity of the network. Thus, data will need to be collected and preprocessed in distributed data stores - as part of a distributed database - at the network edge. Yet, even in this setup, the transfer of query results will incur prohibitive costs. To further reduce the data transfers, patterns in the workloads must be exploited. Particularly in IoT scenarios, we expect data access to be highly skewed. Most data will be store-only, while a fraction will be popular. Here, the replication of popular, raw data, as opposed to the shipment of partially redundant query results, can reduce the volume of data transfers over the network. In this paper, we design online strategies to decide between replicating data from data stores or forwarding the queries and retrieving their results. Our insight is that by profiling access patterns of the data we can lower the data transfer cost and the corresponding response times. We evaluate the benefit of our strategies using two real-world datasets.
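
The online decision the abstract describes, replicating a data object versus continuing to forward its queries, is in the spirit of the classic rent-or-buy (ski-rental) rule. The sketch below shows that textbook rule only, under assumed uniform per-query costs; it is not the paper's profiling-based strategy, and decide, access_log, and the cost parameters are hypothetical.

```python
def decide(access_log, replication_cost, per_query_cost):
    """Rent-or-buy for one data object: forward queries until the
    accumulated transfer cost would pay for a replica, then replicate.
    This deterministic rule is 2-competitive with the offline optimum."""
    spent = 0.0
    for t, _query in enumerate(access_log):
        if spent + per_query_cost >= replication_cost:
            return t          # replicate now; later queries run locally
        spent += per_query_cost
    return None               # replication never pays off for this log
```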

Schwarzenberg, Robert; Harbecke, David; Macketanz, Vivien; Avramidis, Eleftherios; Möller, Sebastian
Train, Sort, Explain: Learning to Diagnose Translation Models
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 29-34.
2019

Abstract: Evaluating translation models is a trade-off between effort and detail. On the one end of the spectrum there are automatic count-based methods such as BLEU, on the other end linguistic evaluations by humans, which arguably are more informative but also require a disproportionately high effort. To narrow the spectrum, we propose a general approach on how to automatically expose systematic differences between human and machine translations to human experts. Inspired by adversarial settings, we train a neural text classifier to distinguish human from machine translations. A classifier that performs and generalizes well after training should recognize systematic differences between the two classes, which we uncover with neural explainability methods. Our proof-of-concept implementation, DiaMaT, is open source. Applied to a dataset translated by a state-of-the-art neural Transformer model, DiaMaT achieves a classification accuracy of 75% and exposes meaningful differences between humans and the Transformer, amidst the current discussion about human parity.
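
To make the train-then-explain recipe concrete, here is a minimal sketch with a linear bag-of-n-grams classifier standing in for the paper's neural classifier (DiaMaT itself uses a neural model and neural explainability methods); the toy sentences and all variable names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: label 1 = human translation, 0 = machine translation.
train_texts = ["the cat sat on the mat", "cat the sat mat on"]
train_labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

# With a linear model, the learned weights themselves expose which n-grams
# systematically separate the two classes.
vocab = clf.named_steps["tfidfvectorizer"].get_feature_names_out()
weights = clf.named_steps["logisticregression"].coef_[0]
```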

Salem, Farouk; Schütt, Thorsten; Schintke, Florian; Reinefeld, Alexander
Scheduling Data Streams for Low Latency and High Throughput on a Cray XC40 Using Libfabric
CUG Conference Proceedings
2019

Abstract: Achieving efficient many-to-many communication on a given network topology is a challenging task when many data streams from different sources have to be scattered concurrently to many destinations with low variance in arrival times. In such scenarios, it is critical to saturate but not to congest the bisectional bandwidth of the network topology in order to achieve a good aggregate throughput. When there are many concurrent point-to-point connections, the communication pattern needs to be dynamically scheduled in a fine-grained manner to avoid network congestion (links, switches), overload in the nodes' incoming links, and receive buffer overflow. Motivated by the use case of the Compressed Baryonic Matter experiment (CBM), we study the performance and variance of such communication patterns on a Cray XC40 with different routing schemes and scheduling approaches. We present a distributed Data Flow Scheduler (DFS) that reduces the variance of arrival times from all sources by at least a factor of 30 and increases the achieved aggregate bandwidth by up to 50%.

Quiring, Erwin; Maier, Alwin; Rieck, Konrad
Misleading Authorship Attribution of Source Code using Adversarial Learning
28th USENIX Security Symposium, pages 479-496.
2019
Derakhshan, Behrouz; Rezaei Mahdiraji, Alireza; Rabl, Tilmann; Markl, Volker
Continuous Deployment of Machine Learning Pipelines
International Conference on Extending Database Technology (EDBT 2019), March 25-29, Lisbon, Portugal
Publisher: OpenProceedings
2019
ISBN: 978-3-89318-081-3
Alt, Christoph; Hübner, Marc; Hennig, Leonhard
Improving Relation Extraction by Pre-trained Language Representations
Proceedings of Automated Knowledge Base Construction (AKBC 2019), May 20-22, Amherst, Massachusetts, United States, pages 1-18.
Publisher: OpenReview
2019
Traub, Jonas; Grulich, Philipp; Cuéllar, Alejandro Rodríguez; Breß, Sebastian; Katsifodimos, Asterios; Rabl, Tilmann; Markl, Volker
Efficient Window Aggregation with General Stream Slicing
22nd International Conference on Extending Database Technology (EDBT 2019), March 26-29, Lisbon, Portugal
Publisher: OpenProceedings
2019
Zeuch, Steffen; Del Monte, Bonaventura; Karimov, Jeyhun; Lutz, Clemens; Renz, Manuel; Traub, Jonas; Breß, Sebastian; Rabl, Tilmann; Markl, Volker
Analyzing Efficient Stream Processing on Modern Hardware
Proceedings of the VLDB Endowment (PVLDB), 12(5):516-530
2019
Awad, Ahmed; Traub, Jonas; Sakr, Sherif
Adaptive Watermarks: A Concept Drift-based Approach for Predicting Event-Time Progress in Data Streams
21st International Conference on Extending Database Technology (EDBT 2018), March 26-29, Vienna, Austria
Publisher: OpenProceedings
2019
Zhao, Guoguang; Zhao, Jianyu; Li, Yang; Alt, Christoph; Schwarzenberg, Robert; Hennig, Leonhard; Schaffer, Stefan; Schmeier, Sven; Hu, Changjian; Xu, Feiyu
MOLI: Smart Conversation Agent for Mobile Customer Service
Information, 10(2):1-14
February 2019
Ganji, Fatemeh; Tajik, Shahin; Sauss, Pascal; Seifert, Jean-Pierre; Forte, Domenic; Tehranipoor, Mark
Rock'n'roll PUFs: Crafting Provably Secure PUFs from Less Secure Ones
In Karine Heydemann, Ulrich Kühne, and Letitia Li, editors, Proceedings of the 8th International Workshop on Security Proofs for Embedded Systems, volume 11, pages 33-48.
September 2019
to be published

Abstract: The era of PUFs has been characterized by the efforts put into research and the development of PUFs that are resilient against attacks, in particular, machine learning (ML) attacks. Due to the lack of systematic and provable methods for this purpose, we have witnessed the ever-continuing competition between PUF designers/manufacturers, cryptanalysts, and of course, adversaries that maliciously break the security of PUFs. This is despite a series of acknowledged principles developed in cryptography and complexity theory, under the umbrella term “hardness amplification”. This paper aims at narrowing the gap between these studies and hardware security, specifically for applications in the domain of PUFs. To this end, this paper provides an example of somewhat hard PUFs and demonstrates how to build a strongly secure construction out of these considerably weaker primitives. Our theoretical findings are discussed in an exhaustive manner and supported by the silicon results captured from real-world PUFs.

Poularakis, Konstantinos; Iosifidis, George; Smaragdakis, Georgios; Tassiulas, Leandros
Optimizing Gradual SDN Upgrades in ISP Networks
IEEE/ACM Transactions on Networking, 27(1):288-301
September 2019

Abstract: Nowadays, there is a fast-paced shift from legacy telecommunication systems to novel software-defined network (SDN) architectures that can support on-the-fly network reconfiguration, therefore, empowering advanced traffic engineering mechanisms. Despite this momentum, migration to SDN cannot be realized at once especially in high-end networks of Internet service providers (ISPs). It is expected that ISPs will gradually upgrade their networks to SDN over a period that spans several years. In this paper, we study the SDN upgrading problem in an ISP network: which nodes to upgrade and when. We consider a general model that captures different migration costs and network topologies, and two plausible ISP objectives: 1) the maximization of the traffic that traverses at least one SDN node, and 2) the maximization of the number of dynamically selectable routing paths enabled by SDN nodes. We leverage the theory of submodular and supermodular functions to devise algorithms with provable approximation ratios for each objective. Using real-world network topologies and traffic matrices, we evaluate the performance of our algorithms and show up to 54% gains over state-of-the-art methods. Moreover, we describe the interplay between the two objectives; maximizing one may cause a factor of 2 loss to the other. We also study the dual upgrading problem, i.e., minimizing the upgrading cost for the ISP while ensuring specific performance goals. Our analysis shows that our proposed algorithm can achieve up to 2.5 times lower cost to ensure performance goals over state-of-the-art methods.
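
The greedy method below is the standard template for the kind of monotone submodular maximization the abstract invokes: repeatedly add the node with the largest marginal gain. It is a generic sketch with a toy coverage objective, not the paper's algorithm; greedy_upgrade, gain, and flows are hypothetical.

```python
def greedy_upgrade(nodes, gain, budget):
    """Greedy maximization of a monotone submodular objective gain(S),
    e.g., traffic traversing at least one SDN node. The greedy choice
    carries the classic (1 - 1/e) approximation guarantee."""
    chosen = set()
    for _ in range(budget):
        best = max((v for v in nodes if v not in chosen),
                   key=lambda v: gain(chosen | {v}) - gain(chosen))
        chosen.add(best)
    return chosen

# Toy coverage objective: each node "covers" a set of traffic flows.
flows = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
coverage = lambda s: len(set().union(*(flows[v] for v in s))) if s else 0
print(greedy_upgrade(flows, coverage, budget=2))
```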

Skrzypczak, Jan; Schintke, Florian; Schütt, Thorsten
Linearizable State Machine Replication of State-Based CRDTs without Logs
In Peter Robinson and Faith Ellen, editors, Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC), pages 455-457.
2019
ISBN: 978-1-4503-6217-7

Abstract: General solutions of state machine replication have to ensure that all replicas apply the same commands in the same order, even in the presence of failures. Such strict ordering incurs high synchronization costs due to the use of distributed consensus or a leader. This paper presents a protocol for linearizable state machine replication of conflict-free replicated data types (CRDTs) that neither requires consensus nor a leader. By leveraging the properties of state-based CRDTs, in particular the monotonic growth of a join semilattice, synchronization overhead is greatly reduced. In addition, updates just need a single round trip and modify the state 'in-place' without the need for a log. Furthermore, the message size overhead for coordination consists of a single counter per message. While reads in the presence of concurrent updates are not wait-free without a coordinator, we show that more than 97% of reads can be handled in one or two round trips under highly concurrent accesses. Our protocol achieves high throughput without auxiliary processes such as command log management or leader election. It is well suited for all practical scenarios that need linearizable access on CRDT data on a fine-granular scale.
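
The "monotonic growth of a join semilattice" that the protocol exploits is easiest to see on the smallest state-based CRDT, a grow-only counter. The sketch below is textbook CRDT material for orientation, not the paper's replication protocol.

```python
class GCounter:
    """State-based grow-only counter: replica states form a join
    semilattice, and merge (element-wise max) is the lattice join."""

    def __init__(self, replica_id, n_replicas):
        self.i = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.i] += 1      # local state only ever grows

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Join is idempotent, commutative, and associative, so replicas
        # converge regardless of message reordering or duplication.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]
```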

Nedelkoski, Sasho; Thamsen, Lauritz; Verbitskiy, Ilya; Kao, Odej
Multilayer Active Learning for Efficient Learning and Resource Usage in Distributed IoT Architectures
2019 IEEE International Conference on Edge Computing (EDGE), pages 8-12.
2019

Abstract: The use of machine learning modeling techniques enables smart IoT applications in geo-distributed infrastructures such as in the areas of Industry 4.0, smart cities, autonomous driving, and telemedicine. The data for these models is continuously emitted by sensor-equipped devices. It is usually unlabeled and commonly has dynamically-changing data distribution, which impedes the learning process. However, many critical applications such as telemedicine require highly accurate models and human supervision. Therefore, online supervised learning is often utilized, but its application remains challenging as it requires continuous labeling by experts, which is expensive. To reduce the cost, active learning (AL) strategies are used for efficient data selection and labeling. In this paper we propose a novel AL framework for IoT applications, which employs data selection strategies throughout the multiple layers of distributed IoT architectures. This enables an improved utilization of the available resources and reduces costs. The results from the evaluation using classification and regression tasks and synthetic as well as real-world datasets in multiple settings show that the use of multilayer AL can significantly reduce communication, expert costs, and energy, without a loss in model performance. We believe that this study motivates the development of new techniques that employ selective sampling strategies on data streams to optimize the resource usage in IoT architectures.
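
A common building block for such data selection is uncertainty sampling: each layer forwards only the items its local model is least sure about. The snippet below is a generic sketch of that strategy, not the framework's code; select_uncertain and the example probabilities are hypothetical.

```python
import numpy as np

def select_uncertain(probs, budget):
    """Return indices of the `budget` most uncertain predictions
    (highest Shannon entropy); only these are forwarded upstream
    for expert labeling."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-budget:]

# Example: four unlabeled items, two classes.
probs = np.array([[0.99, 0.01], [0.55, 0.45], [0.9, 0.1], [0.6, 0.4]])
print(select_uncertain(probs, budget=2))   # -> the two most uncertain items
```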

Mattes, Dirk
Ein System zur deterministischen Wiedergabe von verteilten Algorithmen auf Anwendungsebene [A system for the deterministic replay of distributed algorithms at the application level]
Humboldt-Universität zu Berlin
2019
Mahdavi, Mohammad; Abedjan, Ziawasch; Castro Fernandez, Raul; Madden, Samuel; Ouzzani, Mourad; Stonebraker, Michael; Tang, Nan
Raha: A Configuration-Free Error Detection System
SIGMOD
2019
Iskender, Neslihan; Gabryszak, Aleksandra; Polzehl, Tim; Hennig, Leonhard; Möller, Sebastian
A Crowdsourcing Approach to Evaluate the Quality of Query-based Extractive Text Summaries
11th International Conference on Quality of Multimedia Experience (QoMEX 2019), pages 1-3.
2019

Abstract: High cost and time consumption are concurrent barriers for research and application of automated summarization. In order to explore options to overcome this barrier, we analyze the feasibility and appropriateness of micro-task crowdsourcing for evaluation of different summary quality characteristics and report an ongoing work on the crowdsourced evaluation of query-based extractive text summaries. To do so, we assess and evaluate a number of linguistic quality factors such as grammaticality, non-redundancy, referential clarity, focus and structure & coherence. Our first results imply that referential clarity, focus and structure & coherence are the main factors affecting the perceived summary quality by crowdworkers. Further, we compare these results using an initial set of expert annotations that is currently being collected, as well as an initial set of automatic ROUGE quality scores for summary evaluation. Preliminary results show that ROUGE does not correlate with linguistic quality factors, regardless of whether they are assessed by crowd or experts. Further, crowd and expert ratings show the highest degree of correlation when assessing low-quality summaries. Assessments increasingly diverge when attributing high-quality judgments.

Semmler, Niklas; Smaragdakis, Georgios; Feldmann, Anja
Distributed Mega-Datasets: The Need for Novel Computing Primitives
39th IEEE International Conference on Distributed Computing Systems
November 2019

Abstract: With the ongoing digitalization, an increasing number of sensors is becoming part of our digital infrastructure. These sensors produce highly, even globally, distributed data streams. The aggregate data rate of these streams far exceeds local storage and computing capabilities. Yet, for radical new services (e.g., predictive maintenance and autonomous driving), which depend on various control loops, this data needs to be analyzed in a timely fashion. In this position paper, we outline a system architecture that can effectively handle distributed mega-datasets using data aggregation. Hereby, we point out two research challenges: The need for (1) novel computing primitives that allow us to aggregate data at scale across multiple hierarchies (i.e., time and location) while answering a multitude of a priori unknown queries, and (2) transfer optimizations that enable rapid local and global decision making.

Wisiol, Nils; Becker, Georg T.; Margraf, Marian; Soroceanu, Tudor A. A.; Tobisch, Johannes; Zengin, Benjamin
Breaking the Lightweight Secure PUF: Understanding the Relation of Input Transformations and Machine Learning Resistance
IACR Cryptology ePrint Archive
2019

Abstract: Physical Unclonable Functions (PUFs) and, in particular, XOR Arbiter PUFs have gained much research interest as an authentication mechanism for embedded systems. One of the biggest problems of (strong) PUFs is their vulnerability to so-called machine learning attacks. In this paper we take a closer look at one aspect of machine learning attacks that has not yet gained the needed attention: the generation of the sub-challenges in XOR Arbiter PUFs fed to the individual Arbiter PUFs. Specifically, we look at one of the most popular ways to generate sub-challenges based on a combination of permutations and XORs as it has been described for the "Lightweight Secure PUF". Previous research suggested that using such a sub-challenge generation increases the machine learning resistance significantly. Our contribution in the field of sub-challenge generation is three-fold: First, drastically improving attack results by Rührmair et al., we describe a novel attack that can break the Lightweight Secure PUF in time roughly equivalent to an XOR Arbiter PUF without transformation of the challenge input. Second, we give a mathematical model that gives insight into the weakness of the Lightweight Secure PUF and provides a way to study generation of sub-challenges in general. Third, we propose a new, efficient, and cost-effective way for sub-challenge generation that mitigates the attack strategy we used and outperforms the Lightweight Secure PUF in both machine learning resistance and resource overhead.
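
For orientation, the widely used additive delay model behind such ML attacks can be simulated in a few lines: each arbiter chain is a linear threshold function over a challenge feature map, and a k-XOR PUF multiplies the k signs. This sketch models a plain XOR Arbiter PUF without the Lightweight Secure PUF's input transformation; all names are hypothetical.

```python
import numpy as np

def xor_arbiter_puf(weights, challenges):
    """Additive delay model: chain i answers sign(w_i . phi(c)); the
    k-XOR response is the product of the k signs.
    weights: (k, n+1) delay parameters; challenges: (m, n) in {0, 1}."""
    c = 1 - 2 * challenges                          # map {0,1} -> {+1,-1}
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]   # suffix products
    phi = np.hstack([phi, np.ones((challenges.shape[0], 1))])
    return np.prod(np.sign(phi @ weights.T), axis=1)

# Toy instance: 4-XOR, 16-bit challenges, Gaussian delay parameters.
rng = np.random.default_rng(0)
responses = xor_arbiter_puf(rng.normal(size=(4, 17)),
                            rng.integers(0, 2, size=(5, 16)))
```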

Hartung, Marc; Schintke, Florian; Schütt, Thorsten
Pinpoint Data Races via Testing and Classification
2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW); 3rd International Workshop on Software Faults (IWSF 2019)
2019
Gholami, Masoud; Schintke, Florian
Multilevel Checkpoint/Restart for Large Computational Jobs on Distributed Computing Resources
2019 IEEE 38th Symposium on Reliable Distributed Systems (SRDS)
2019
accepted
Geldenhuys, Morgan K.; Thamsen, Lauritz; Gontarska, Kain Kordian; Lorenz, Felix; Kao, Odej
Effectively Testing System Configurations of Critical IoT Analytics Pipelines
2019 IEEE International Conference on Big Data
Publisher: IEEE
December 2019
to be published
Abedjan, Ziawasch
Data Profiling
Encyclopedia of Big Data Technologies.
2019
Çakal, Öykü Özlem; Mahdavi, Mohammad; Abedjan, Ziawasch
CLRL: Feature Engineering for Cross-Language Record Linkage
EDBT, pages 678-681.
2019
Esmailoghli, Mahdi; Redyuk, Sergey; Martinez, Ricardo; Abedjan, Ziawasch; Rabl, Tilmann; Markl, Volker
Explanation of Air Pollution Using External Data Sources
BTW, pages 297-300.
2019
Abedjan, Ziawasch; Boujemaa, Nozha; Campbell, Stuart; Casla, Patricia; Chatterjea, Supriyo; Consoli, Sergio; Costa Soria, Cristobal; Czech, Paul; Despenic, Marija; Garattini, Chiara; Hamelinck, Dirk; Heinrich, Adrienne; Kraaij, Wessel; Kustra, Jacek; Lojo, Aizea; Martin Sanchez, Marga; Angel Mayer, Miguel; Melideo, Matteo; Menasalvas, Ernestina; Moller Aarestrup, Frank; Narro Artigot, Elvira; Petkovic, Milan; Reforgiato Recupero, Diego; Rodriguez Gonzalez, Alejandro; Roesems Kerremans, Gisele; Roller, Roland; Romao, Mario; Ruping, Stefan; Sasaki, Felix; Spek, Wouter; Stojanovic, Nenad; Thoms, Jack; Vasiljevs, Andrejs; Verachtert, Wilfried; Wuyts, Roel
Data Science in Healthcare: Benefits, Challenges and Opportunities
Data Science for Healthcare: Methodologies and Applications, pages 3-38.
2019
Warnecke, Alexander; Arp, Daniel; Wressnegger, Christian; Rieck, Konrad
Don't Paint It Black: White-Box Explanations for Deep Learning in Computer Security
CoRR, abs/1906.02108
2019

Abstract: Deep learning is increasingly used as a basic building block of security systems. Unfortunately, deep neural networks are hard to interpret, and their decision process is opaque to the practitioner. Recent work has started to address this problem by considering black-box explanations for deep learning in computer security (CCS'18). The underlying explanation methods, however, ignore the structure of neural networks and thus omit crucial information for analyzing the decision process. In this paper, we investigate white-box explanations and systematically compare them with current black-box approaches. In an extensive evaluation with learning-based systems for malware detection and vulnerability discovery, we demonstrate that white-box explanations are more concise, sparse, complete and efficient than black-box approaches. As a consequence, we generally recommend the use of white-box explanations if access to the employed neural network is available, which usually is the case for stand-alone systems for malware detection, binary analysis, and vulnerability discovery.

Alonso, Gustavo; Binnig, Carsten; Pandis, Ippokratis; Salem, Kenneth; Skrzypczak, Jan; Stutsman, Ryan; Thostrup, Lasse; Wang, Tianzheng; Wang, Zeke; Ziegler, Tobias
DPI: The Data Processing Interface for Modern Networks
CIDR 2019, 9th Biennial Conference on Innovative Data Systems Research, Online Proceedings
2019

Abstract: As data processing evolves towards large scale, distributed platforms, the network will necessarily play a substantial role in achieving efficiency and performance. Increasingly, switches, network cards, and protocols are becoming more flexible while programmability at all levels (aka, software defined networks) opens up many possibilities to tailor the network to data processing applications and to push processing down to the network elements. In this paper, we propose DPI, an interface providing a set of simple yet powerful abstractions flexible enough to exploit features of modern networks (e.g., RDMA or in-network processing) suitable for data processing. Mirroring the concept behind the Message Passing Interface (MPI) used extensively in high-performance computing, DPI is an interface definition rather than an implementation so as to be able to bridge different networking technologies and to evolve with them. In the paper we motivate and discuss key primitives of the interface and present a number of use cases that show the potential of DPI for data-intensive applications, such as analytic engines and distributed database systems.

Chmiela, Stefan; Sauceda, Huziel E.; Poltavsky, Igor; Müller, Klaus-Robert; Tkatchenko, Alexandre
sGDML: Constructing accurate and data efficient molecular force fields using machine learning
Computer Physics Communications, 240:38-45
2019

Abstract: We present an optimized implementation of the recently proposed symmetric gradient domain machine learning (sGDML) model. The sGDML model is able to faithfully reproduce global potential energy surfaces (PES) for molecules with a few dozen atoms from a limited number of user-provided reference molecular conformations and the associated atomic forces. Here, we introduce a Python software package to reconstruct and evaluate custom sGDML force fields (FFs), without requiring in-depth knowledge about the details of the model. A user-friendly command-line interface offers assistance through the complete process of model creation, in an effort to make this novel machine learning approach accessible to broad practitioners. Our paper serves as a documentation, but also includes a practical application example of how to reconstruct and use a PBE0+MBD FF for paracetamol. Finally, we show how to interface sGDML with the FF simulation engines ASE (Larsen et al., 2017) and i-PI (Kapil et al., 2019) to run numerical experiments, including structure optimization, classical and path integral molecular dynamics and nudged elastic band calculations.
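
In the spirit of the package's documented Python interface, a minimal usage sketch might look as follows; the model file and geometry are placeholders, and exact signatures should be checked against the sGDML documentation.

```python
import numpy as np
from sgdml.predict import GDMLPredict   # pip install sgdml

# 'model.npz' is a placeholder for a force field trained with the sgdml CLI.
model = np.load('model.npz', allow_pickle=True)
gdml = GDMLPredict(model)

# One geometry as flattened Cartesian coordinates, shape (1, 3 * n_atoms).
n_atoms = 20
r = np.zeros((1, 3 * n_atoms))
energy, forces = gdml.predict(r)   # returns energy and per-atom forces
```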

Salem, Farouk; Schintke, Florian; Schütt, Thorsten; Reinefeld, Alexander
Scheduling data streams for low latency and high throughput on a Cray XC40 using Libfabric
Concurrency and Computation: Practice and Experience, pages 1-14
2019

Abstract: Achieving efficient many-to-many communication on a given network topology is a challenging task when many data streams from different sources have to be scattered concurrently to many destinations with low variance in arrival times. In such scenarios, it is critical to saturate but not to congest the bisectional bandwidth of the network topology in order to achieve a good aggregate throughput. When there are many concurrent point-to-point connections, the communication pattern needs to be dynamically scheduled in a fine-grained manner to avoid network congestion (links, switches), overload in the nodes' incoming links, and receive buffer overflow. Motivated by the use case of the Compressed Baryonic Matter experiment (CBM), we study the performance and variance of such communication patterns on a Cray XC40 with different routing schemes and scheduling approaches. We present a distributed Data Flow Scheduler (DFS) that reduces the variance of arrival times from all sources by at least a factor of 30 and increases the achieved aggregate bandwidth by up to 50%.

Arras, Leila; Osman, Ahmed; Müller, Klaus-Robert; Samek, Wojciech
Evaluating Recurrent Neural Network Explanations
CoRR, abs/1904.11829
2019

Abstract: Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs. The goal of these methods is to understand the network's decisions by assigning to each input variable, e.g., a word, a relevance indicating to which extent it contributed to a particular prediction. In previous works, some of these methods were not yet compared to one another, or were evaluated only qualitatively. We close this gap by systematically and quantitatively comparing these methods in different settings, namely (1) a toy arithmetic task which we use as a sanity check, (2) a five-class sentiment prediction of movie reviews, and besides (3) we explore the usefulness of word relevances to build sentence-level representations. Lastly, using the method that performed best in our experiments, we show how specific linguistic phenomena such as the negation in sentiment analysis reflect in terms of relevance patterns, and how the relevance visualization can help to understand the misclassification of individual samples.

Alt, Christoph; Hübner, Marc; Hennig, Leonhard
Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 1388-1398.
Publisher: Association for Computational Linguistics
2019

Abstract: Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels. Current relation extraction methods try to alleviate the noise by multi-instance learning and by providing supporting linguistic and contextual information to more efficiently guide the relation classification. While achieving state-of-the-art results, we observed these models to be biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail. To address this gap, we utilize a pre-trained language model, the OpenAI Generative Pre-trained Transformer (GPT) (Radford et al., 2018). The GPT and similar models have been shown to capture semantic and syntactic features, and also a notable amount of “common-sense” knowledge, which we hypothesize are important features for recognizing a more diverse set of relations. By extending the GPT to the distantly supervised setting, and fine-tuning it on the NYT10 dataset, we show that it predicts a larger set of distinct relation types with high confidence. Manual and automated evaluation of our model shows that it achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels.

Alber, Maximilian; Lapuschkin, Sebastian; Seegerer, Philipp; Hägele, Miriam; Schütt, Kristof T.; Montavon, Grégoire; Samek, Wojciech; Müller, Klaus-Robert; Dähne, Sven; Kindermans, Pieter-Jan
iNNvestigate neural networks!
Journal of Machine Learning Research, 20:93:1-93:8
2019

Abstract: In recent years, deep neural networks have revolutionized many application domains of machine learning and are key components of many critical decision or predictive processes. Therefore, it is crucial that domain specialists can understand and analyze actions and predictions, even of the most complex neural network architectures. Despite these arguments, neural networks are often treated as black boxes. In the attempt to alleviate this shortcoming, many analysis methods were proposed, yet the lack of reference implementations often makes a systematic comparison between the methods a major effort. The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementation for many analysis methods, including the reference implementation for PatternNet and PatternAttribution as well as for LRP-methods. To demonstrate the versatility of iNNvestigate, we provide an analysis of image classifications for a variety of state-of-the-art neural network architectures.
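
A usage sketch along the lines of the library's published README, assuming a trained Keras classifier named model and an input batch x_batch (both placeholders):

```python
import innvestigate
import innvestigate.utils

# model: a trained Keras classifier (placeholder); strip its softmax first.
model_wo_softmax = innvestigate.utils.model_wo_softmax(model)

# Pick one of the implemented analyzers, e.g., epsilon-LRP.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_softmax)
analysis = analyzer.analyze(x_batch)   # per-input relevance scores
```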

Ganji, Fatemeh; Forte, Domenic; Seifert, Jean-Pierre
PUFmeter: A Property Testing Tool for Assessing the Robustness of Physically Unclonable Functions to Machine Learning Attacks
IEEE Access, 7:122513-122521
August 2019
ISSN: 2169-3536

Abstract: As PUFs become ubiquitous for commercial products (e.g., FPGAs from Xilinx, Altera, and Microsemi), attacks against these primitives are evolving toward more omnipresent and even advanced techniques. Machine learning (ML) attacks, among other non-invasive attacks, are proven to be feasible and cost-effective in the real-world. However, for PUF designers, it still remains an open question whether their countermeasures, or even new designs, are resistant to these types of attacks. Although standard metrics for estimating PUF quality exist, the most common approaches for measuring resistance to ML attacks are empirical. This paper introduces PUFmeter, a new publicly available toolbox consisting of in-house developed algorithms, to provide a firm basis for the robustness assessment of PUFs against ML attacks. To this end, new metrics and notions are reintroduced by PUFmeter to PUF designers and manufacturers. Furthermore, to prepare the PUF input-output pairs adequately before conducting any analysis, PUFmeter involves modules that output the minimum number of measurement repetitions and the upper bound on the noise level affecting the PUF responses.