The document proposes Env2Vec, a deep learning approach to accelerate virtual network function (VNF) testing. Env2Vec learns a universal resource characterization model to detect deviations between inferred and actual resource usage, flagging anomalies. It was evaluated on over 400,000 data points from 600+ real-world testing environments and 125 builds, detecting defects automatically with 86.2-100% accuracy while reducing false alarms. Env2Vec reuses learned environment embeddings to detect problems in previously unseen environments with better performance than other approaches.
Enhancing Failure Propagation Analysis in Cloud Computing Systems - ISSRE 201... – Pietro Liguori
Slide presentation of the paper "Enhancing Failure Propagation Analysis in Cloud Computing Systems", presented at the conference International Symposium on Software Reliability Engineering (ISSRE), Berlin, October 2019.
What If Solar String Monitoring Was An Affordable, Temporary Solution? – Affinity Energy
Historically, string monitoring has been too expensive. But what if you had a temporary solution that analyzed PV site data a few times a year at a fraction of the cost? Ultimately, you'd have the data you need to find the issues behind underperformance. Learn more at http://www.affinityenergy.com/freelance-string-monitoring-case-study/
Cloud Failure Prediction with Hierarchical Temporal Memory An Empirical Asses... – Oliviero Riganelli
Hierarchical Temporal Memory (HTM) is an unsupervised learning algorithm inspired by features of the neocortex that can continuously process streaming data and detect anomalies, without requiring large amounts of training data or labeled data. HTM is also able to learn continuously from samples, providing a model that is always up to date with respect to observations. These characteristics make HTM particularly suitable for supporting online failure prediction in cloud systems, which have dynamically changing behavior and must be monitored to anticipate problems. This paper presents the first systematic study that assesses HTM in the context of failure prediction. The results obtained from 72 configurations of HTM applied to 12 different types of faults introduced in the Clearwater cloud system show that HTM can help predict failures with sufficient effectiveness (F-measure = 0.76), representing an interesting practical alternative to (semi-)supervised algorithms.
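As a quick reminder of the reported metric, the F-measure is the harmonic mean of precision and recall. A minimal sketch (the counts below are illustrative, not taken from the paper):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 76 true positives, 24 false positives,
# 24 false negatives give precision = recall = 0.76, hence F1 = 0.76.
print(round(f_measure(76, 24, 24), 2))  # 0.76
```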
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/08/case-study-facial-detection-and-recognition-for-always-on-applications-a-presentation-from-synopsys/
Jamie Campbell, Product Marketing Manager for Embedded Vision IP at Synopsys, presents the “Case Study: Facial Detection and Recognition for Always-On Applications” tutorial at the May 2021 Embedded Vision Summit.
Although there are many applications for low-power facial recognition in edge devices, perhaps the most challenging to design are always-on, battery-powered systems that use facial recognition for access control. Laptop, tablet and cellphone users expect hands-free and instantaneous facial recognition. This means the electronics must be always on, constantly looking to detect a face, and then ready to pull from a data set to recognize the face.
This presentation describes the challenges of moving traditional facial detection neural networks to the edge. It explores a case study of a face recognition access control application requiring continuous operation and extreme energy efficiency. Finally, it describes how the combination of Synopsys DesignWare ARC EM and EV processors provides low-power, efficient DSP and CNN acceleration for this application.
Through four use cases with examples, we describe how IEEE 1687 can be extended to include analog and mixed-signal chips, including linkage to circuit simulators on one end of the ecosystem and ATE on the other. The role of instrumentation, whether on the tester or on the device itself, is central to analog testing, and conveniently also the focal point of IEEE 1687. We identify enhancements to the modular netlist and test languages (ICL and PDL) to facilitate the description of the components involved in analog tests as well as the content of the tests themselves.
Surveillance scene classification using machine learning – Utkarsh Contractor
The problem of scene classification in surveillance footage is of great importance for ensuring security in public areas. With challenges such as low-quality feeds, occlusion, viewpoint variations, and background clutter, the task is both challenging and error-prone. It is therefore important to keep false positives low to maintain a high detection accuracy. In this paper, we adapt high-performing CNN architectures to identify abandoned luggage in a surveillance feed. We explore several CNN-based approaches, from transfer learning on the ImageNet dataset to object classification using Faster R-CNNs on the COCO dataset. Using network visualization techniques, we gain insight into what the neural network sees and the basis of its classification decisions. The experiments have been conducted on real-world datasets and highlight the complexity of such classifications. The obtained results indicate that a combination of the proposed techniques outperforms the individual approaches.
Swimming upstream: OPNFV Doctor project case study – OPNFV
Based on the lifecycle of the OPNFV Doctor project, this case study shows how operator requirements “on paper” have successfully been realized, step by step and in close cooperation with upstream community projects, into a mature fault management framework. A demo of the solution was presented in a keynote at the last OpenStack Summit. The talk will describe how we have worked in the OPNFV Doctor project and will provide some lessons learned from this journey. With significant experience now of working OPNFV requirements upstream to OpenStack, we’ll share best practices for submitting contributions upstream, how to best communicate, and how to overcome the primary challenges.
Challenges in Practicing High Frequency Releases in Cloud Environments Liming Zhu
Talk at RELENG 2014
Full paper: http://www.nicta.com.au/pub?doc=7925
The continuous delivery trend is dramatically shortening release cycles from months to hours. Applications with high-frequency releases often rely heavily on automated deployment tools using cloud infrastructure APIs. We report results from experiments on reliability issues of cloud infrastructure and on the trade-offs between using heavily-baked and lightly-baked images. Our experiments were based on Amazon Web Services (AWS) OpsWorks APIs and the configuration management tool Chef. Based on our experiments, we propose error-handling practices that can be included in tailor-made continuous deployment facilities.
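The paper's concrete error-handling practices are not reproduced here, but one common pattern for coping with transient cloud-API failures in deployment automation is retrying with exponential backoff. A hedged sketch — `TransientError` and `flaky_deploy_step` are illustrative stand-ins, not AWS OpsWorks or Chef APIs:

```python
import time

class TransientError(Exception):
    """Stand-in for a throttling/timeout error from a cloud API."""

def with_retries(operation, max_attempts=3, base_delay=0.01):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up; surface the error to the deployment pipeline
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a deployment step that fails twice before succeeding.
attempts = []
def flaky_deploy_step():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("throttled")
    return "deployed"

print(with_retries(flaky_deploy_step))  # deployed
```

A real pipeline would retry only errors known to be transient (throttling, timeouts) and fail fast on everything else.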
More related info at our DevOps book http://www.ssrg.nicta.com.au/projects/devops_book/
The DevOps methodology integrates development and operations so that system changes can get rolled out quickly without causing unplanned downtime. Industrial organizations that successfully implement DevOps will have a strong advantage, but knowing how to get started can be a real challenge.
Keynote, ISSRE-13, St. Malo, France, November 4, 2004.
Outline: 21st Century IT Trends, Mobile Technology Crisis, Test Effectiveness Levels, Level 4 Case Study, Reliability Arithmetic, Test Performance Envelope.
Ensemble Launches Major Upgrade to NFV Platform – ADVA
Our Ensemble virtualization software suite has been significantly enhanced and optimized for telco-scale NFV deployments. The new release largely focuses on improving the deployability and cost of universal customer premises equipment (uCPE) - still the most prevalent NFV use case. The new features and functions outlined in these slides are a direct result of customer input and our Ensemble team’s experience of multiple real-world NFV deployments.
Future Internet: Managing Innovation and Testbed – Shinji Shimojo
Innovation is a big keyword for ICT research and development. However, the road toward innovation is full of uncertainties, and there are many obstacles. Key elements in overcoming these obstacles seem to be agile management of people, software and hardware. In addition, we think the involvement of users in R&D will have a strong effect on the management of uncertainty in R&D. In this talk, I present our approach to user involvement in JGN-X, an international future-internet testbed, and Knowledge Capital, Osaka, a smart-city experimental testbed.
How to Operate Kubernetes CI/CD Pipelines at Scale – DevOps.com
In a recent survey of 500 attendees at KubeCon Barcelona, 76.7% of responses identified CI/CD automation as the #1 use case for deploying Kubernetes. DevOps teams' productivity and effectiveness depend on their ability to automate, operate, and manage CI/CD pipelines at scale. However, provisioning and managing many of the CI/CD components and the underlying Kubernetes clusters remains a largely manual process, slowing down a team's ability to deliver software faster. Furthermore, due to a lack of skills and operational complexity, managing day-2 operations and lifecycle management of the end-to-end stack continues to be a daunting challenge.
Join Kamesh Pemmaraju, Head of Product Marketing at Platform9, Eric Bannon, Senior Product Manager at Platform9, and Mark Galpin, Senior Product Manager at JFrog, to hear about how DevOps teams can:
Configure, deploy, and run Kubernetes without the pain of managing it, on any infrastructure of choice, using the Platform9 Managed Kubernetes-as-a-Service
Deliver applications end-to-end using JFrog Pipelines for CI/CD automation, JFrog Artifactory for securely managing Docker images and other artifacts, and JFrog Xray for security and image scanning.
Conduct blue-green or canary production deployments
Deploy and configure Platform9 Managed Prometheus to monitor application performance as you roll out new features on a continuous basis.
We will have a live demo to show the above capabilities using a sample end-to-end application deployment.
Addressing the Top 10 Challenges of LTE EPC Testing – Aricent
Summary presentation from a webinar by Aricent testing experts about the challenges of LTE EPC testing. The full webinar recording and presentation are available at http://info.aricent.com/LTE_EPC_Testing_Webinar_Sep14_2011_ss.html
Domain-Aware Sentiment Classification with GRUs and CNNs – Guangyuan Piao
We describe a deep neural network architecture for the domain-aware sentiment classification task, with the purpose of classifying the sentiment of product reviews in different domains and evaluating nine pre-trained embeddings provided by the semantic sentiment classification challenge at the 15th Extended Semantic Web Conference. The proposed approach combines the domain and the sequence of word embeddings of the summary or text of each review for Gated Recurrent Units (GRUs) to produce the corresponding sequence of embeddings, aware of the domain and previous words. Afterwards, it extracts local features from the output of the GRU layer using Convolutional Neural Networks (CNNs). The two sets of local features extracted from the domain-aware summary and text of a review are concatenated into a single vector and used for classifying the sentiment of the given review. Our approach obtained a 0.9643 F1-score on the test set and achieved 1st place in the Semantic Sentiment Analysis Challenge at the 15th Extended Semantic Web Conference.
WISE2017 - Factorization Machines Leveraging Lightweight Linked Open Data-ena... – Guangyuan Piao
With the popularity of Linked Open Data (LOD) and the associated rise in freely accessible knowledge that can be accessed via LOD, exploiting LOD for recommender systems has been widely studied based on various approaches, such as graph-based algorithms or different machine learning models with LOD-enabled features. Many of the previous approaches require construction of an additional graph to run graph-based algorithms or to extract path-based features by combining user-item interactions (e.g., likes, dislikes) and background knowledge from LOD. In this paper, we investigate Factorization Machines (FMs) based on particularly lightweight LOD-enabled features which can be obtained directly via a public SPARQL endpoint without any additional effort to construct a graph. Firstly, we aim to study whether using FMs with these lightweight LOD-enabled features can provide competitive performance compared to a learning-to-rank approach leveraging LOD, as well as to other well-established approaches such as kNN-item and BPRMF. Secondly, we are interested in finding out to what extent each set of LOD-enabled features contributes to the recommendation performance. Experimental evaluation on a standard dataset shows that our proposed approach using FMs with lightweight LOD-enabled features provides the best performance compared to other approaches in terms of five evaluation metrics. In addition, the study of the recommendation performance based on different sets of LOD-enabled features indicates that property-object lists and PageRank scores of items are useful for improving the performance, and provide the best performance when used together for the FM. We observe that subject-property lists of items do not contribute to the recommendation performance but rather decrease it.
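A 2-way Factorization Machine scores an instance as a global bias plus linear terms plus factorized pairwise interactions; the interaction term can be computed in O(kn) rather than O(kn^2) using the identity 0.5 * ((sum_i v_i x_i)^2 - sum_i (v_i x_i)^2) per latent factor. A small self-contained sketch of this standard FM prediction (the weights and feature values below are made up, not the paper's trained model):

```python
def fm_score(w0, w, V, x):
    """2-way Factorization Machine prediction.

    w0: global bias; w: linear weights; V: one latent vector of length k
    per feature; x: feature values (e.g., user/item indicators plus
    LOD-enabled features).
    """
    linear = sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])
    interactions = 0.0
    for f in range(k):  # O(k*n) trick: 0.5 * ((sum v*x)^2 - sum (v*x)^2)
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        interactions += 0.5 * (s * s - s_sq)
    return w0 + linear + interactions

# Toy parameters: 3 features, k = 2 latent factors.
w0, w = 0.1, [0.2, -0.3, 0.5]
V = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
x = [1.0, 2.0, 3.0]
score = fm_score(w0, w, V, x)
```

The linear-time trick is what makes FMs practical for the sparse, high-dimensional feature vectors that LOD-enabled features produce.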
EKAW2016 - Interest Representation, Enrichment, Dynamics, and Propagation: A ... – Guangyuan Piao
Microblogging services such as Twitter have been widely adopted due to the highly social nature of the interactions they facilitate. With the rich information generated by users on these services, user modeling aims to acquire knowledge about a user's interests, which is a fundamental step towards personalization as well as recommendations. To this end, researchers have explored different dimensions such as (1) Interest Representation, (2) Content Enrichment, (3) Temporal Dynamics of user interests, and (4) Interest Propagation using semantic information from a knowledge base such as DBpedia. However, those dimensions of user modeling have largely been studied separately, and there is a lack of research on their synergetic effect for user modeling. In this paper, we address this research gap by investigating 16 different user modeling strategies produced by various combinations of those dimensions. The different user modeling strategies are evaluated in the context of a personalized link recommender system on Twitter. Results show that Interest Representation and Content Enrichment play crucial roles in user modeling, followed by Temporal Dynamics. The user modeling strategy considering Interest Representation, Content Enrichment and Temporal Dynamics provides the best performance among the 16 strategies. On the other hand, Interest Propagation has little effect on user modeling when leveraging a rich Interest Representation or considering Content Enrichment.
SEMANTiCS2016 - Exploring Dynamics and Semantics of User Interests for User ... – Guangyuan Piao
In this paper, we propose user modeling strategies which use Concept Frequency - Inverse Document Frequency (CF-IDF) as a weighting scheme and incorporate either or both of the dynamics and semantics of user interests. To this end, we first provide a comparative study of user modeling strategies from previous literature that consider the dynamics of user interests, to present their comparative performance. In addition, we investigate different types of information (i.e., categories, classes and entities connected via various properties) for entities from DBpedia, and combinations of them, for extending user interest profiles. Finally, we build our user modeling strategies incorporating either or both of the best-performing methods in each dimension. Results show that our strategies significantly outperform two baseline strategies in the context of link recommendations on Twitter.
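CF-IDF transfers the TF-IDF idea from terms to knowledge-base concepts: a concept's weight in a user profile is its frequency in that profile, discounted by how common the concept is across all profiles. A minimal sketch under that reading (the toy concept lists are invented, and the paper's exact weighting may differ in detail):

```python
import math

def cf_idf(profiles):
    """Weight each concept per user: concept frequency x inverse document
    frequency, treating each user profile as a 'document'.

    profiles: dict mapping user -> list of concept mentions.
    Returns dict mapping user -> {concept: weight}.
    """
    n = len(profiles)
    df = {}  # number of profiles in which each concept appears
    for concepts in profiles.values():
        for c in set(concepts):
            df[c] = df.get(c, 0) + 1
    return {
        user: {c: concepts.count(c) * math.log(n / df[c]) for c in set(concepts)}
        for user, concepts in profiles.items()
    }

# Toy profiles (invented concept URIs):
profiles = {
    "alice": ["dbpedia:Python", "dbpedia:Python", "dbpedia:Twitter"],
    "bob":   ["dbpedia:Twitter", "dbpedia:Golf"],
}
weights = cf_idf(profiles)
# "dbpedia:Twitter" appears in every profile, so its IDF (and weight) is 0.
```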
UMAP2016EA - Analyzing MOOC Entries of Professionals on LinkedIn for User Mod... – Guangyuan Piao
The main contribution of this work is the comparison of three user modeling strategies, based on job titles, educational fields and skills in LinkedIn profiles, for personalized MOOC recommendations in a cold-start situation. Results show that the skill-based user modeling strategy performs best, followed by the job- and edu-based strategies.
UMAP2016 - Analyzing Aggregated Semantics-enabled User Modeling on Google+ an... – Guangyuan Piao
In this paper, we study whether reusing Google+ profiles can provide reliable recommendations on Twitter to resolve the cold-start problem. Next, we investigate the impact of giving different weights when aggregating user profiles from the two OSNs, and show that giving a higher weight to the targeted OSN profile yields the best performance in the context of a personalized link recommender system. Finally, we propose a user modeling strategy which combines entity- and category-based user profiles with a discounting strategy. Results show that our proposed strategy improves the quality of user modeling significantly compared to the baseline method.
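The weighted aggregation idea can be sketched in a few lines: interest scores from the two OSN profiles are combined linearly, with a higher weight on the targeted OSN. The weight value and toy profiles below are illustrative, not the paper's tuned parameters:

```python
def aggregate_profiles(target, auxiliary, alpha=0.7):
    """Combine two interest profiles (concept -> score), weighting the
    targeted-OSN profile (e.g., Twitter) higher than the auxiliary one
    (e.g., Google+). alpha is the target weight, illustrative only.
    """
    concepts = set(target) | set(auxiliary)
    return {
        c: alpha * target.get(c, 0.0) + (1 - alpha) * auxiliary.get(c, 0.0)
        for c in concepts
    }

# Toy profiles with invented concept URIs and scores:
twitter = {"dbpedia:Python": 0.8, "dbpedia:Coffee": 0.2}
google_plus = {"dbpedia:Python": 0.4, "dbpedia:Hiking": 0.6}
merged = aggregate_profiles(twitter, google_plus)
```

Concepts present in only one network still enter the merged profile, just down-weighted by the corresponding factor.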
SAC2016 - Measuring Semantic Distance for Linked Open Data-enabled Recommender ... – Guangyuan Piao
The Linked Open Data (LOD) initiative has been quite successful in terms of publishing and interlinking data on the Web. On top of this huge amount of interconnected data, measuring the relatedness between resources could serve various applications such as LOD-enabled recommender systems. In this paper, we propose various distance measures, built on the basic concept of Linked Data Semantic Distance (LDSD), for calculating the Linked Data semantic distance between resources, which can be used in a LOD-enabled recommender system.
We evaluated the distance measures in the context of a recommender system that provides top-N recommendations, against baseline methods such as LDSD. Results show that performance is significantly improved by our proposed distance measures, which incorporate normalizations using both the resources and the global appearances of paths in the graph.
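For intuition only — this is a simplification, not the paper's exact LDSD formula or its proposed normalizations — a link-based distance makes two resources closer the more direct links connect them. A toy sketch over invented triples:

```python
def direct_distance(graph, a, b):
    """Simplified link-based semantic distance (intuition only): more
    direct links in either direction between two resources -> smaller
    distance. graph: set of (subject, property, object) triples.
    """
    links_ab = sum(1 for s, p, o in graph if s == a and o == b)
    links_ba = sum(1 for s, p, o in graph if s == b and o == a)
    return 1.0 / (1.0 + links_ab + links_ba)

# Invented triples for illustration:
triples = {
    ("dbpedia:The_Beatles", "dbo:genre", "dbpedia:Rock_music"),
    ("dbpedia:Queen", "dbo:genre", "dbpedia:Rock_music"),
    ("dbpedia:The_Beatles", "dbo:associatedBand", "dbpedia:Queen"),
}
print(direct_distance(triples, "dbpedia:The_Beatles", "dbpedia:Queen"))  # 0.5
```

The paper's measures additionally exploit indirect paths (shared neighbors) and normalize by how common each path is in the whole graph.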
Analyzing User Modeling on Twitter for Personalized News Recommendations – Guangyuan Piao
Presentation for reading group 30/09/2015, Check out my recent work http://parklize.github.io/#research on User Modeling which is motivated by this presentation.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... – John Andrews
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... – pchutichetpong
M Capital Group (“MCG”) expects demand to grow and supply to evolve, facilitated by institutional investment rotating out of offices and into work-from-home (“WFH”) assets, while the need for data storage keeps expanding as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as advancing cloud services and edge sites, allowing the industry to expect strong annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Adjusting primitives for graph: SHORT REPORT / NOTES – Subhajit Sahu
These notes concern adjusting primitives for graph algorithms such as PageRank. Compressed Sparse Row (CSR) is an adjacency-list-based graph representation.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Techniques to optimize the PageRank algorithm usually fall into two categories. One tries to reduce the work per iteration, and the other tries to reduce the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, which share the same in-links, helps reduce duplicate computations and thus could reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance, since the final ranks of chain nodes can be easily calculated; this could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Env2Vec: Accelerating VNF Testing with Deep Learning
Guangyuan Piao, Pat Nicholson, Diego Lugones
Nokia Bell Labs, Dublin, Ireland
The 15th European Conference on Computer Systems, 30/04/2020
Q & A
Contact information:
Guangyuan Piao: guangyuan.piao@nokia-bell-labs.com
Pat Nicholson: pat.nicholson@nokia-bell-labs.com
Diego Lugones: diego.lugones@nokia-bell-labs.com