The second AlgoUK workshop will take place on 19-20 September 2018 at the University of Liverpool. The theme of the first day’s talks will be Algorithms and Network Science and Technologies, featuring speakers from both industry and academia. The talks on the second day will be in the broader area of algorithms and complexity.
The talks will take place in the Sensor City Building, which is within walking distance of the Department of Computer Science, University of Liverpool. The schedule and speaker details are as follows.
19 September 2018
13:30 – 13:40  Introduction to AlgoUK
13:40 – 14:20  Edward Pyzer-Knapp
14:20 – 15:00  Thomas Gorry
15:00 – 15:30  Break
15:30 – 16:10  Cliff Brereton
16:10 – 16:50  Alan Marshall
16:50 – 17:30  Christian Setzkorn

20 September 2018
9:30 – 10:10  Sayan Bhattacharya
10:10 – 10:50  Paul Duetting
10:50 – 11:20  Break
11:20 – 12:00  Gregory Gutin
12:00 – 12:40  George Mertzios
12:40 – 13:20  Maria Serna
13:20 – 15:00  Lunch
We will aim to go to dinner on the first day at around 18:30. Note that attendees are responsible for organising and paying for their own dinners; however, 30 places have been reserved at the Bem Brasil restaurant for those interested (more details will be supplied on the day). Also, note that lunch will be provided after the close of the second day for those who wish to partake.
19 September 2018
- Edward Pyzer-Knapp (IBM Research, Hartree Centre): Bayesian Optimization – a broad and powerful family of algorithms
- Abstract: Bayesian optimization is a black box optimization technique, typically used when data is extremely expensive (in cost or time) to collect. Due to its black box nature, it does not require analytic access to the closed form of the function you wish to optimize, which opens the door to optimizing a wide variety of interesting properties. In this talk I will outline the nature of the Bayesian optimization algorithm, show some examples of its use in an industrial setting, and finally address the question ‘if this is so great, why isn’t ALL optimization Bayesian?’.
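As a concrete illustration (not the speaker's code), the loop at the heart of Bayesian optimization can be sketched in a few lines: a Gaussian-process surrogate is fitted to the points evaluated so far, and an expected-improvement acquisition function picks the next point to query. The objective, kernel length-scale and candidate grid below are all made-up stand-ins for an expensive black-box function.

```python
# Minimal Bayesian optimization sketch: GP surrogate + expected improvement,
# maximising a 1-D black-box function over a fixed candidate grid.
import math
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    """GP posterior mean and standard deviation at x_new (prior variance 1)."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(x_obs, x_new)
    mean = k_star.T @ np.linalg.solve(K, y_obs)
    var = np.clip(1.0 - np.sum(k_star * np.linalg.solve(K, k_star), axis=0),
                  1e-12, None)
    return mean, np.sqrt(var)

def expected_improvement(mean, std, best):
    """EI for maximisation: E[max(f - best, 0)] under the posterior."""
    z = (mean - best) / std
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mean - best) * cdf + std * pdf

def bayes_opt(f, grid, n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    x_obs = rng.choice(grid, size=n_init, replace=False)   # initial design
    y_obs = np.array([f(x) for x in x_obs])
    for _ in range(n_iter):
        mean, std = gp_posterior(x_obs, y_obs, grid)
        x_next = grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
        x_obs = np.append(x_obs, x_next)                   # query the oracle
        y_obs = np.append(y_obs, f(x_next))
    return x_obs[np.argmax(y_obs)], y_obs.max()

# Expensive-to-evaluate stand-in objective with its maximum at x = 0.45.
f = lambda x: -(x - 0.45) ** 2
x_best, y_best = bayes_opt(f, np.linspace(0.0, 1.0, 101))
```

With only a dozen or so function evaluations the search concentrates near the true maximum, which is the point of the method when each evaluation is costly.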
- Thomas Gorry (Quanovo Data Technologies): An introduction to deep learning and its future development
- Abstract: This talk will be concerned with what AI is, and what it is not, in the context of Deep Learning, and will discuss, with examples, how Artificial Neural Networks are used to achieve this type of Machine Learning. Illustrations will be given of real-world uses in industry where Deep Learning has been deployed and is already being used. Finally the talk will focus on some of the technical challenges, in the form of open problems, that are preventing widespread adoption of AI across industry, as well as some promising avenues that are being explored en route to solutions.
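By way of a toy illustration (not taken from the talk), a minimal Artificial Neural Network trained by backpropagation fits in a few lines of numpy; XOR is the standard example of a function that no single linear layer can represent, so learning it genuinely needs the hidden layer.

```python
# A two-layer neural network trained by backpropagation to learn XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # XOR targets

W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss with sigmoid output).
    dz2 = (p - y) / len(X)
    dh = (dz2 @ W2.T) * (1.0 - h ** 2)          # tanh derivative
    W2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(axis=0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(axis=0)
# p now rounds to the XOR truth table.
```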
- Cliff Brereton (OCF DATA Limited): The rise in the use of data and algorithms in traditional industry workflows
- Abstract: The use of data, and of algorithms to extract insights from that data, has grown significantly over the past decade. Part of the reason for this is the amount of data generated that can be analysed, but more especially organisations are looking at ‘real-world’ opportunities and challenges through the lens of data – how to improve or maximise outcomes using the analysis of data across a diverse range of scenarios: from healthcare to manufacturing, from managing power grids to student education. OCF DATA are a workflow and performance improvement organisation that uses data and analytics tools and methods to deliver point solutions and services for the NHS, universities, manufacturers and utility companies. This talk will look at the methods used to deploy data analytics solutions into these traditional workflows and processes, as well as examining some specific case examples from different industries.
- Alan Marshall (Department of Electrical Engineering and Electronics, University of Liverpool): A trust mechanism for distributed cooperative spectrum sensing for cognitive radio ad hoc networks
- Abstract: The current shortage in radio spectrum originated from the static assignment of frequency bands to the primary users (PUs, or licensees) of the bands. Dynamic spectrum access, enabled by cognitive radios, introduces an adaptive approach to spectrum use that provides more flexibility by allowing secondary users (SUs) to use licensed spectrum bands on an opportunistic, non-interference basis. Cooperation among cognitive radios for spectrum sensing is deemed essential in these environments. Distributed (decentralized) cooperative spectrum sensing (DCSS) eliminates the need for a fusion center or infrastructure to perform the cooperation. Therefore, unlike the centralized approach, it scales well and does not have the disadvantage of a single point of failure. Existing DCSS schemes are based on distributed average consensus algorithms in which each node holds a measured received power and the goal is to compute the average via local interactions with neighboring nodes. However, such schemes are vulnerable to spectrum sensing data falsification attacks. A new distributed trust management scheme that is integrated into the cooperation is proposed in order to mitigate different types of data falsification attacks. This is evaluated in practical scenarios where the primary user and secondary users of the spectrum, and the malicious users, behave dynamically.
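The distributed average-consensus iteration the abstract refers to can be sketched as follows (the topology, step size and power readings are made up): each node repeatedly nudges its value towards those of its neighbours, so every node converges to the global mean received power without any fusion center.

```python
# Distributed average consensus over a sensing network.
import numpy as np

def consensus(powers, edges, step=0.2, iters=200):
    """powers: initial measurement per node; edges: undirected neighbour pairs."""
    x = np.array(powers, dtype=float)
    for _ in range(iters):
        update = np.zeros_like(x)
        for i, j in edges:              # each node only talks to its neighbours
            update[i] += x[j] - x[i]
            update[j] += x[i] - x[j]
        x += step * update
    return x

# Ring of 5 sensing nodes with (invented) noisy power measurements in dBm.
powers = [-62.0, -60.5, -61.2, -59.8, -60.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x = consensus(powers, edges)
# All nodes agree (approximately) on the mean measurement, -60.7 dBm.
```

The vulnerability the talk addresses is visible in this sketch: a single node that reports a falsified value drags every node's limit away from the true mean, which is what the proposed trust mechanism is designed to mitigate.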
- Christian Setzkorn (Shop Direct, Liverpool): Techniques and algorithms for product pricing
- Abstract: Product pricing is the lever that has the highest and quickest impact on profit. It is also used to control revenue, price perception and market share. This talk introduces pricing concepts, techniques and algorithms used in practice, and the challenges faced. Concepts discussed include elasticity, halo, cannibalisation and pull-forward effects. We give a high-level overview of the basic statistical techniques used, such as log-log and two-way interaction regression. We also discuss techniques to determine interacting/similar products and the use of multi-objective evolutionary algorithms to optimise revenue, margin and competitor price distances.
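As a small illustration of the log-log regression mentioned above (the numbers are invented, and this is not Shop Direct's model), a constant price elasticity can be estimated as the slope of an ordinary-least-squares fit of log demand against log price.

```python
# Estimating price elasticity with a log-log regression:
#   log(demand) = intercept + elasticity * log(price)
import numpy as np

prices = np.array([10.0, 12.0, 15.0, 18.0, 22.0])    # invented price points
demand = np.array([500.0, 402.0, 310.0, 252.0, 200.0])  # invented unit sales

# Ordinary least squares on the log-transformed data.
X = np.column_stack([np.ones_like(prices), np.log(prices)])
coef, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
intercept, elasticity = coef
# elasticity comes out near -1.16: a 1% price rise loses about 1.16% of demand.
```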
20 September 2018
- Sayan Bhattacharya (Department of Computer Science, University of Warwick): Dynamic algorithms for graph coloring
- Abstract: We show how to design fast dynamic algorithms for proper vertex and edge colorings in a graph undergoing edge insertions and deletions. In the static setting, there are simple linear time algorithms for (Δ+1)-vertex coloring and (2Δ-1)-edge coloring in a graph with maximum degree Δ. It is natural to ask if we can efficiently maintain such colorings in the dynamic setting as well. We get the following three results. (1) We present a randomized algorithm which maintains a (Δ+1)-vertex coloring with O(log Δ) expected amortized update time. (2) We present a deterministic algorithm which maintains a (1+o(1))Δ-vertex coloring with O(polylog Δ) amortized update time. (3) We present a simple, deterministic algorithm which maintains a (2Δ-1)-edge coloring with O(log Δ) worst-case update time. (Joint work with Deeparnab Chakrabarty, Monika Henzinger and Danupon Nanongkai.)
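The "simple linear time algorithm" for static (Δ+1)-vertex coloring mentioned in the abstract is the greedy one: scan the vertices in any order and give each the smallest color not already used by a neighbour; since a vertex has at most Δ neighbours, some color in {0, …, Δ} is always free. A minimal sketch:

```python
# Greedy static (Δ+1)-vertex coloring.
def greedy_coloring(adj):
    """adj: dict mapping each vertex to its list of neighbours."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:        # smallest color unused by coloured neighbours
            c += 1
        color[v] = c
    return color

# 5-cycle: maximum degree Δ = 2, so at most Δ + 1 = 3 colors are used.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(adj)
```

The talk's challenge is maintaining such a coloring while edges are inserted and deleted, without rerunning this scan from scratch.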
- Paul Duetting (Department of Mathematics, London School of Economics): Prophet inequalities made easy: stochastic optimization by pricing non-stochastic inputs
- Abstract: We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms for settings beyond binary selection. Our analysis takes the form of an extension theorem: we derive sufficient conditions on prices when all weights are known in advance, then prove that the resulting approximation guarantees extend directly to stochastic settings. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted price mechanisms, and is used to derive new and improved results for combinatorial markets (with and without complements), multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted price mechanisms with comparable performance guarantees. This is joint work with Michal Feldman (Tel Aviv University), Thomas Kesselheim (University of Bonn) and Brendan Lucier (Microsoft Research).
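The flavour of the threshold algorithms the abstract generalises can be seen in the classic single-item prophet inequality: accepting the first reward that clears the threshold T = E[max]/2 guarantees, in expectation, at least half of the prophet's expected maximum. A small simulation sketch (the reward distributions are made up):

```python
# Single-item prophet inequality: fixed threshold vs the prophet's optimum.
import random

def prophet_vs_threshold(dists, trials=20000, seed=0):
    """Compare the T = E[max]/2 threshold rule against the prophet."""
    rng = random.Random(seed)
    samples = [[draw(rng) for draw in dists] for _ in range(trials)]
    prophet = sum(max(s) for s in samples) / trials  # prophet sees everything
    T = prophet / 2.0                                # the classic threshold
    def online(s):
        # Accept the first reward at least T; otherwise receive nothing.
        return next((v for v in s if v >= T), 0.0)
    alg = sum(online(s) for s in samples) / trials
    return alg, prophet

# Three rewards, uniform on increasing ranges (an invented example).
dists = [lambda r: r.uniform(0, 1),
         lambda r: r.uniform(0, 2),
         lambda r: r.uniform(0, 3)]
alg, prophet = prophet_vs_threshold(dists)
# Guarantee: alg >= prophet / 2, up to sampling noise.
```

The framework in the talk extends exactly this idea, with prices playing the role of the threshold, to rich combinatorial feasibility constraints.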
- Gregory Gutin (Department of Computer Science, Royal Holloway, University of London): Algorithms for out-branchings in digraphs
- Abstract: We will give an overview of (mainly parameterized) algorithms for out-branchings in digraphs with an extremal number of leaves. In particular, we will discuss an O∗(3.86^k)-time and polynomial-space deterministic algorithm for deciding whether a digraph has an out-branching with at least k internal vertices (Gutin, Reidl, Wahlstrom and Zehavi, JCSS 95(1), 2018).
- George Mertzios (Department of Computer Science, Durham University): Temporal vertex covers and sliding time windows
- Abstract: Modern, inherently dynamic systems are usually characterized by a network structure, i.e. an underlying graph topology, which is subject to discrete changes over time. Given a static underlying graph G, a temporal graph can be represented via an assignment of a set of integer time-labels to every edge of G, indicating the discrete time steps when this edge is active. While most of the recent theoretical research on temporal graphs has focused on the notion of a temporal path and other “path-related” temporal notions, only a few attempts have been made to investigate “non-path” temporal graph problems. In this work, motivated by applications in sensor and in transportation networks, we introduce and study two natural temporal extensions of the classical problem Vertex Cover. In our first problem, Temporal Vertex Cover, the aim is to cover every edge at least once during the lifetime of the temporal graph, where an edge can only be covered by one of its endpoints at a time step when it is active. In our second, more pragmatic variation Sliding Window Temporal Vertex Cover, we are also given a natural number Δ, and our aim is to cover every edge at least once at every Δ consecutive time steps. In both cases we wish to minimize the total number of “vertex appearances” that are needed to cover the whole graph. We present a thorough investigation of the computational complexity and approximability of these two temporal covering problems. In particular, we provide strong hardness results, complemented by various approximation and exact algorithms. Some of our algorithms are polynomial-time, while others are asymptotically almost optimal under the Exponential Time Hypothesis (ETH) and other plausible complexity assumptions.
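On a tiny, made-up temporal graph, the Temporal Vertex Cover problem defined in the abstract can be solved by brute force, which makes the definition concrete (this is only an illustration; the talk's algorithms are far more refined).

```python
# Brute-force Temporal Vertex Cover: choose vertex appearances (v, t) so that
# every edge is covered by an endpoint at some time step when it is active.
from itertools import combinations

def min_temporal_vertex_cover(labels):
    """labels: dict mapping each edge (u, w) to its set of active time steps."""
    appearances = sorted({(v, t)
                          for (u, w), ts in labels.items()
                          for v in (u, w) for t in ts})
    def covers(chosen):
        return all(any((u, t) in chosen or (w, t) in chosen for t in ts)
                   for (u, w), ts in labels.items())
    # Try candidate covers in increasing size; the first hit is optimal.
    for k in range(len(appearances) + 1):
        for subset in combinations(appearances, k):
            if covers(set(subset)):
                return set(subset)

# Triangle whose edges are active at different time steps (invented example).
labels = {("a", "b"): {1}, ("b", "c"): {2}, ("a", "c"): {2}}
cover = min_temporal_vertex_cover(labels)
# One appearance at time 1 plus the single appearance ("c", 2), which covers
# both edges active at time 2, suffice: two vertex appearances in total.
```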
- Maria Serna (Computer Science Department, Universitat Politècnica de Catalunya, Barcelona): Problems and complexity in influence and social, simple games
- Abstract: Simple games are used to formulate situations in which the actors or players have to decide about a single alternative. Briefly speaking, a simple game is determined by the monotone family of its winning coalitions, i.e., those sets of actors that can force a “yes” decision. We first survey the basic definitions, parameters and properties of simple games and the complexity of the associated computational problems. We then analyse the role of a social network in this setting. First, we introduce influence games, where the decision is taken on the basis of the number of participants that can be influenced by a coalition under the linear threshold model for influence spread. Second, we consider players that are related through a social network represented by a graph. Under the hypothesis that a coalition to be formed requires some communication among its members, it seems natural to restrict coalitions by requiring that the subgraph induced by the coalition is connected. This leads us to the definition of social, simple games. Finally, we analyse the computational complexity of the proposed problems in the new families of games. This is joint work with Xavier Molinero, Fabián Riquelme, Eline van Hove and Jordi Castellví.
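The linear threshold model for influence spread mentioned in the abstract can be sketched as follows (the graph, thresholds and quota are made up): a vertex becomes active once enough of its neighbours are active, and a coalition is winning in an influence game if its cascade activates at least a quota of vertices.

```python
# Linear threshold influence spread, and the induced influence game.
def spread(adj, thresholds, seed_set):
    """Activate seed_set, then repeat: a vertex becomes active once its
    number of active neighbours reaches its threshold."""
    active = set(seed_set)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and \
               sum(u in active for u in adj[v]) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def is_winning(adj, thresholds, coalition, quota):
    """A coalition wins if its cascade activates at least `quota` vertices."""
    return len(spread(adj, thresholds, coalition)) >= quota

# Path a - b - c - d with thresholds of 1: activating "a" cascades along it.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
thresholds = {"a": 1, "b": 1, "c": 1, "d": 1}
win = is_winning(adj, thresholds, {"a"}, quota=3)
```

Here the singleton coalition {"a"} is winning because its cascade activates all four vertices, illustrating how the family of winning coalitions of an influence game is induced by the network rather than listed explicitly.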
The organising committee: