Testing is important for any system you write and at eBay it is no different. We have a number of complex Scala and Akka based applications with a large number of external dependencies. One of the challenges of testing this kind of application is replicating the complete system across all your environments: development, different flavors of testing (unit, functional, integration, capacity and acceptance) and production. This is especially true in the case of integration and capacity testing where there are a multitude of ways to manage system complexity. Wouldn’t it be nice to define the testing system architecture in one place that we can reuse in all our tests? It turns out we can do exactly that using Docker. In this talk, we will first look at how to take advantage of Docker for integration testing your Scala application. After that we will explore how this has helped us reduce the duration and complexity of our tests.
Service Discovery in Mesos - Miguel Angel Guillen, J On The Beach
Mesos uses ZooKeeper for service discovery, but sometimes your applications don't support ZooKeeper, or you need to run legacy applications whose source you cannot modify. For these cases, I researched and tested different systems for service discovery in a Mesos/Marathon setup. I will explain the problems and advantages of the current solutions I tested.
Infrastructure Deployment with Docker & Ansible - Robert Reiz
This is an introduction to Docker & Ansible. It shows how Ansible can be used as an orchestration tool for Docker. Two real-world examples are included, with code samples in a Gist.
Docker Compose is a tool that lets you create and manage development and test environments in a simple, repeatable way.
We'll see how to build an enterprise-grade development environment for Node that lets us automate tasks and test our code effectively.
Nowadays we cannot imagine development without Continuous Integration; the next level of software engineering is Continuous Delivery. There is a lot of noise around this topic, yet successful implementations are still rare.
In this talk I'm going to share how to implement CI/CD in a simple and efficient way using Fabric8.
Custom deployments with sbt-native-packager - Gary Coady
sbt-native-packager offers a comprehensive approach to packaging artifacts with SBT. The user describes a generic layout, which can then be extended for different types of software and deployments. For example, it is flexible enough to describe both a Zip-based archive format, and an RPM package with appropriate Systemd configuration for a service.
This talk will cover the essentials needed to understand the design of sbt-native-packager, and how to extend its structure to create custom layouts and deployments.
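As a hedged illustration of the generic-layout idea described above, a build.sbt might extend the Universal layout along these lines (a sketch, not from the talk itself; the project layout and file paths are hypothetical):

```scala
// build.sbt -- illustrative sketch of sbt-native-packager usage
enablePlugins(JavaAppPackaging) // generic app packaging: zip/tgz, launch scripts

maintainer := "ops@example.com" // required metadata for some formats (e.g. deb/rpm)

// Extend the generic (Universal) layout with an extra config file;
// concrete formats such as zip or rpm derive their contents from these mappings.
Universal / mappings += file("src/universal/conf/app.conf") -> "conf/app.conf"
```

Because every format maps back to the Universal layout, customizing the mappings once is usually enough to affect all output formats.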
Docker compose è uno strumento che permette di creare e gestire ambienti di sviluppo e test in modo semplice e ripetibile.
Vediamo come creare un ambiente di sviluppo per node di livello enterprise, che ci permetta di automatizzare task e testare in modo efficace il nostro codice
Nowadays we cannot imagine development without Continuous Integration, the advance level of software engineering is Continuous Delivery. There are a lot of noise around this topic however successful implementations are still rare.
In this topic I'm going to share how to implement CI/CD in simple and efficient way using Fabric8.
Custom deployments with sbt-native-packagerGaryCoady
sbt-native-packager offers a comprehensive approach to packaging artifacts with SBT. The user describes a generic layout, which can then be extended for different types of software and deployments. For example, it is flexible enough to describe both a Zip-based archive format, and an RPM package with appropriate Systemd configuration for a service.
This talk will cover the essentials needed to understand the design of sbt-native-packager, and how to extend its structure to create custom layouts and deployments.
Idiomatic Scala code uses types to represent ideas such as "parallel execution" (Future), "the possible absence of a value" (Option), and "either a result or an error" (Try/Either/Scalaz Disjunction). These types are both common and useful, but when they are combined, a lot of repetitive boilerplate usually creeps into your code.
We will explore this topic, and will attempt to combine parallel execution with error handling, in the context of a web application, trying to make both the business logic, and error handling, obvious and clean.
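A minimal sketch of the combination in question: two parallel Future calls whose results are Eithers, combined with for-comprehensions (the domain types User and Order, and the fetch functions, are made up for illustration):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

case class User(id: Int, name: String)
case class Order(id: Int, total: Double)

// Hypothetical service calls: each runs asynchronously and may fail
// with a domain error carried in the Left of an Either.
def fetchUser(id: Int): Future[Either[String, User]] =
  Future(Right(User(id, "alice")))

def fetchOrders(id: Int): Future[Either[String, List[Order]]] =
  Future(Right(List(Order(1, 9.99))))

// Start both futures before the for-comprehension so they run in
// parallel, then combine the inner Eithers, short-circuiting on the
// first error.
def userReport(id: Int): Future[Either[String, (User, List[Order])]] = {
  val fu = fetchUser(id)
  val fo = fetchOrders(id)
  for {
    eu <- fu
    eo <- fo
  } yield for { u <- eu; o <- eo } yield (u, o)
}

val result = Await.result(userReport(42), 2.seconds)
```

The nested for-comprehensions are exactly the boilerplate the abstract refers to; monad transformers (e.g. an EitherT) are one common way to flatten them.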
When a software product is continuously improved, we want to get those improvements into users' hands as quickly and easily as possible. Continuous Delivery is automated testing and automated deployment knitted together, so you just build great features and check them in; the automation handles the rest.
We'll look at how to set up the necessary infrastructure and automation so that good additions and fixes to the code flow quickly and automatically out to end users, while the not-so-good ones are stopped along the way.
JoJo Pymes - Online Marketing Plan for Small and Medium-Sized Businesses - JoJo Digital
JoJo Pymes - Online Marketing Plan for Small and Medium-Sized Businesses. Learn about our monthly online marketing plans and position your business ahead of its competitors with your own premium website at the best price.
Presentation by Diego Antista on current marketing trends, online search trends, and the options Google offers for advertising on the Web.
Digital Marketing Clinic, Faculty of Design and Communication, Universidad de Palermo, May 2010.
DCSF 19: Building Your Development Pipeline - Docker, Inc.
Oliver Pomeroy, Docker & Laura Tacho, CloudBees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However, this opens up new challenges: Do I have to build a new CI/CD stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session, full of examples and how-tos, Olly and Laura will guide you through common situations and decisions related to your pipelines. We'll cover building minimal images, scanning and signing images, and give examples of how to enforce compliance standards and best practices across your teams.
Docker and Puppet for Continuous Integration - Giacomo Vacca
Today developers want to change the code, build and deploy often, even several times per day.
New versions of software may need to be tested on different distributions, and with different configurations.
Achieving this with virtual machines is possible, but it is very resource- and time-consuming. Docker provides an incredibly good solution, particularly when combined with Continuous Integration tools like Jenkins and configuration management tools like Puppet.
This presentation focuses on the opportunities to automatically configure Docker images, use Docker containers as disposable workers during your tests, and even run your Continuous Integration system inside Docker.
The Future of Security and Productivity in Our Newly Remote World - DevOps.com
Andy has made mistakes. He's seen even more. And in this talk he details the best and the worst of the container and Kubernetes security problems he's experienced, exploited, and remediated.
This talk details low-level exploitable issues with container and Kubernetes deployments. We focus on lessons learned and show attendees how to ensure that they do not fall victim to avoidable attacks.
See how to bypass security controls and exploit insecure defaults in this technical appraisal of the container and cluster security landscape.
This presentation gives a brief understanding of the Docker architecture, explains what Docker is not, describes the basic commands, and explains CI/CD as an application of Docker.
Docker introduction.
Reference: The Docker Book: Containerization is the new virtualization
http://www.amazon.in/Docker-Book-Containerization-new-virtualization-ebook/dp/B00LRROTI4/ref=sr_1_1?ie=UTF8&qid=1422003961&sr=8-1&keywords=docker+book
DCEU 18: Building Your Development Pipeline - Docker, Inc.
Oliver Pomeroy - Solution Engineer, Docker
Laura Frank Tacho - Director of Engineering, CloudBees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However this opens up new challenges… Do I have to build a new CI/CD Stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and “how-to”s, Olly and Laura will guide you through common situations and decisions related to your pipelines. We’ll cover building minimal images, scanning and signing images, and give examples on how to enforce compliance standards and best practices across your teams.
Setting Up a Docker-Based Cross-Compilation Environment for Your Projects - corehard_by
How to quickly and easily set up and update cross-compilation environments for projects targeting different platforms (based on Docker), how to switch between them quickly, and how to use these building blocks to organize CI and testing (based on GitLab and Docker).
PuppetConf 2017: What’s in the Box?! - Leveraging Puppet Enterprise & Docker - ... - Puppet
“Docker, Docker, Docker.” It’s a phrase we hear often, but what are containers, what can they be used for, and why should you know more about them? In this session, Grace (Puppet) and Tricia (AppDynamics) will introduce attendees to Docker and help them build and deploy their first container with Puppet. They will leverage the docker_image_build module from the Puppet Forge and take attendees through the proper workflow for coupling Docker and Puppet together. The session will focus on how to use some of the newest Docker features, such as multi-stage build files and password stores within Docker so you can pass "secrets" to a swarm for login credentials. The goal is to provide newcomers with a working proficiency of how to get started deploying containers using Puppet as their automation tool.
Introduction to Docker at the Azure Meet-up in New York - Jérôme Petazzoni
This is the presentation given at the Azure New York Meet-Up group, September 3rd.
It includes a quick overview of the Open Source Docker Engine and its associated services delivered through the Docker Hub. It also covers the new features of Docker 1.0, and briefly explains how to get started with Docker on Azure.
Introduction to Docker, December 2014 "Tour de France" Edition - Jérôme Petazzoni
Docker, the open source container engine, lets you build, ship, and run any app, anywhere.
This is the presentation which was shown in December 2014 for the "Tour de France" in Paris, Lille, Lyon, Nice...
Massively scalable ETL in real world applications: the hard way - J On The Beach
Big Data examples always give the correct answers. However, in the real world, Big Data might be corrupt, contradictory or consist of so many small files it becomes extremely hard to keep track - let alone scale. A solid architecture will help to overcome many of the difficulties.
Floris will talk about a real-world implementation of a massively scalable ETL architecture. Two years ago, at the time of the implementation, Airflow had just become part of Apache and still left much to be desired. The requirements from the start, however, were thousands of ETL tasks per day on average, occasionally rising to hundreds of thousands. The script-based method that was in place was already unable to meet the requirements on a day-to-day basis and needed to be replaced as soon as possible, so this custom framework was rolled out in just 8 weeks of development time.
Traditional Big Data is done on data you have: you load the data into a repository and perform MapReduce or other styles of calculation on it. However, certain industries need to perform complex operations on data you might not have. Data you can acquire, data that can be shared with you, and data that you can model are all types of data you may not have but may need to integrate instantly into a complex data analysis. The problem is that you may not even know you need this data until deep in the execution stack at runtime. This talk discusses a new functional language paradigm for dealing naturally with data you don't have and for making all data first-class citizens, whether you have it or not, and we will give a demo of a project written in Scala that deals with exactly this issue.
Acoustic Time Series in Industry 4.0: Improved Reliability and Cyber-Security... - J On The Beach
Industry 4.0, aka the "Fourth Industrial Revolution," refers to the computerization of manufacturing. One important aspect of Industry 4.0 is the ability to monitor the health and reliability of a physical manufacturing plant using low-cost IoT sensors. For example, machine learning models can be trained to predict the physical degradation of a manufacturing system as a function of acoustic measurements obtained from strategically placed microphones; however, the same acoustic measurements can be used to reverse engineer proprietary information about the manufacturing process and/or precisely what is being manufactured at the time of recording. Thus, improved reliability and fault tolerance is achieved at the cost of what appears to be an unprecedented new class of security vulnerabilities related to the acoustic side channel.
As a case study, we report a novel acoustic side channel attack against a commercial DNA synthesizer, a commonly used instrument in fields such as synthetic biology. Using a smart phone-quality microphone placed on or in the near vicinity of a DNA synthesizer, we were able to determine with 88.07% accuracy the sequence of DNA being produced; using a database of biologically relevant known-sequences, we increased the accuracy of our model to 100%. An academic or industrial research project may use the synthetic DNA to engineer an organism with desired traits or functions; however, while the organism is still under development, prior to publication, patent, and/or copyright, the research remains vulnerable to academic intellectual property theft and/or industrial espionage. On the other hand, this attack could also be used for benevolent purposes, for example, to determine whether a suspected criminal or terrorist is engineering a harmful pathogen. Thus, it is essential to recognize both the benefits and risks inherent to the cyber-physical systems that will inevitably control Industry 4.0 manufacturing processes and to take steps to mitigate them whenever possible.
Where is the edge in IoT and how much can you do there? Data collection? Analytics? I’ll show you how to build and deploy an embedded IoT edge platform that can do data collection, analytics, dashboarding and much more. All using Open Source.
As IoT deployments move forward, the need to collect, analyze, and respond to data further out on the edge becomes a critical factor in the success – or failure – of any IoT project. Network bandwidth costs may be dropping, and storage is cheaper than ever, but at IoT scale, these costs can still quickly overrun a project’s budget and ultimately doom it to failure.
The more you centralize your data collection and storage, the higher these costs become. Edge data collection and analysis can dramatically lower these costs, plus decrease the time to react to critical sensor data. With most data platforms, it simply isn’t practical, or even possible, to push collection AND analytics to the edge. In this talk I’ll show how I’ve done exactly this with a combination of open source hardware – Pine64 – and open source software – InfluxDB – to build a practical, efficient and scalable data collection and analysis gateway device for IoT deployments. The edge is where the data is, so the edge is where the data collection and analytics needs to be.
Drinking from the firehose, with virtual streams and virtual actors - J On The Beach
Event Stream Processing is a popular paradigm for building robust and performant systems in many different domains, from IoT to fraud detection to high-frequency trading. Because of the wide range of scenarios and requirements, it is difficult to conceptualize a unified programming model that would be equally applicable to all of them. Another tough challenge is how to build streaming systems with cardinalities of topics ranging from hundreds to billions while delivering good performance and scalability.
In this session, Sergey Bykov will talk about the journey of building Orleans Streams that originated in gaming and monitoring scenarios, and quickly expanded beyond them. He will cover the programming model of virtual streams that emerged as a natural extension of the virtual actor model of Orleans, the architecture of the underlying runtime system, the compromises and hard choices made in the process. Sergey will share the lessons learned from the experience of running the system in production, and future ideas and opportunities that remain to be explored.
Over the last twenty years, there has been a paradigm shift in software development: from meticulously planned release cycles to an experimental way of working in which lead times are becoming shorter and shorter.
How can Java ever keep up with this trend when we have Docker containers that are several hundred megabytes in size, with warm-up times of ten minutes or longer? In this talk, I'll demonstrate how we can use Quarkus so that we can create super small, super fast Java containers! This will give us better possibilities for scaling up and down - which can be a game-changer, especially in a serverless environment. It will also provide the shortest possible lead times, as well as a much better use of cloud performance with the added bonus of lower costs.
When Cloud Native meets the Financial Sector - J On The Beach
We live in our own bubble of microservices and endlessly horizontal scaling infrastructure, but there is still critical infrastructure that runs the world of financial systems depending on Windows boxes, FTP servers, and single-threaded protocols. This talk is about how to glue these two worlds together, what works for us and what doesn't.
The advancement of technology in the last decade or so has allowed astronomy to see exponential growth in data volumes. ESA's space telescope Euclid will gather high-resolution images of a third of the sky, ~850GB of data downloaded daily for 6 years; by 2032 the ground-based telescope LSST will have generated 500PB of data; and the radio telescope SKA will be producing more data per second than the entire internet worldwide. This talk will address what techniques currently exist to handle big data volumes, how the astronomical community is preparing for this big data wave, and what other challenges lie ahead.
The world is moving from a model where data sits at rest, waiting for people to make requests of it, to where data is constantly moving and streams of data flow to and from devices with or without human interaction. Decisions need to be made based on these streams of data in real-time, models need to be updated, and intelligence needs to be gathered. In this context, our old-fashioned approach of CRUD REST APIs serving CRUD database calls just doesn't cut it. It's time we moved to a stream-centric view of the world.
The TIPPSS Imperative for IoT - Ensuring Trust, Identity, Privacy, Protection... - J On The Beach
Our increasingly connected world leveraging the Internet of Things (IoT) creates great value, in connected healthcare, smart cities, and more. The increasing use of IoT also creates great risk. We will discuss the challenges and risks we need to address as developers in TIPPSS - Trust, Identity, Privacy, Protection, Safety, and Security - for devices, systems and solutions we deliver and use. Florence leads IEEE workstreams on clinical IoT and data interoperability with blockchain addressing TIPPSS issues. She is an author of IEEE articles on "Enabling Trust and Security - TIPPSS for IoT" and "Wearables and Medical Interoperability - the Evolving Frontier", "TIPPSS for Smart Cities" in the 2017 book "Creating, Analysing and Sustaining Smarter Cities: A Systems Perspective" , and Editor in Chief for an upcoming book on "Women Securing the Future with TIPPSS for IoT."
Pushing AI to the Client with WebAssembly and Blazor - J On The Beach
Want to run your AI algorithms directly in the browser on the client-side? Now you can with WebAssembly and Blazor. Join us as we write code directly in WebAssembly. Then, we’ll look at Blazor and how you can use it, along with WebAssembly to run your tooling client side in the browser.
Want to run your AI algorithms directly in the browser on the client-side without the need for transpilers or browser plug-ins? Well, now you can with WebAssembly and Blazor. WebAssembly (WASM) is the W3C specification that will be used to provide the next generation of development tools for the web and beyond. Blazor is Microsoft’s experiment that allows ASP.Net developers to create web pages that do much of the scripting work in C# using WASM. Come join us as we learn to write code directly in WebAssembly’s human-readable format. Then, we’ll look at the current state of Blazor and how you can use it, along with WebAssembly to run your tooling client side in the browser.
Raft is a well-known consensus protocol for distributed systems. Want to learn how consensus is achieved in a system with a large amount of data, such as Axon Server's Event Store? Join this talk to hear the specifics of data replication in a highly available Event Store!
Axon is a free and open source Java framework for writing Java applications following DDD, event sourcing, and CQRS principles. While especially useful in a microservices context, Axon provides great value in building structured monoliths that can be broken down into microservices when needed.
Axon Server is a messaging platform specifically built to support distributed Axon applications. One of its key benefits is storing events published by Axon applications. In not-so-rare cases, the number of these events runs into the millions, even billions. The availability of Axon Server plays a significant role in the product portfolio, so to keep event replication reliable we chose the Raft consensus protocol as the basis of our clustering features.
In short, consensus involves multiple servers agreeing on values. Once they reach a decision on a value, that decision is final. Typical consensus algorithms make progress when any majority of their servers is available; for example, a cluster of 5 servers can continue to operate even if 2 servers fail. If more servers fail, they stop making progress (but will never return an incorrect result).
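The majority rule described above can be sketched in a few lines (an illustrative toy, not Axon Server's actual implementation):

```scala
// A Raft-style cluster makes progress as long as a strict majority
// of its servers is available.
def majority(clusterSize: Int): Int = clusterSize / 2 + 1

def canMakeProgress(clusterSize: Int, failedServers: Int): Boolean =
  clusterSize - failedServers >= majority(clusterSize)

val threeDownOfFive = canMakeProgress(5, 3) // false: only 2 of 5 remain
val twoDownOfFive   = canMakeProgress(5, 2) // true: 3 of 5 is a majority
```

This is why the 5-server example tolerates exactly 2 failures: 3 servers still form a majority, but 2 do not.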
Join this talk to learn why we chose Raft, what our findings were during the design, implementation, and testing phases, and what it means to replicate an event store holding billions of events!
The Six Pitfalls of building a Microservices Architecture (and how to avoid t... - J On The Beach
Thinking of moving to Microservices? Watch out! That quest is full of traps, social traps. If you are not able to handle it, you may be blocked by meetings, frustration, endless challenges that will make you miss the monolith. In this talk, I share my experience and mistakes, so you can avoid them.
Creating or migrating to a Microservices architecture might easily become a big mess, not only due to technical challenges but mostly because of human factors: it’s a major change in the software culture of a company. In this talk, I’ll share my past experience as the technical lead of an ambitious Microservices-based product, I’ll go through the parts we struggled with, and give you some advice on how to deal with what I call the Six Pitfalls:
The Common Patterns Phobia
The Book Club Cult
The Never-Decoupled Story
The Buzz Words Syndrome
The Agile Trap
The Conway’s Law Hackers
Instead of randomly injecting faults (i.e. Chaos Monkey), what if we could order our experiments so as to perform the minimum number of experiments for maximum yield? We present a solution (and results) to the problem of experiment selection, using Lineage Driven Fault Injection to reduce the search space of faults.
Lineage Driven Fault Injection (LDFI) is a state-of-the-art technique for chaos engineering experiment selection. Since its inception, LDFI has used a SAT solver under the hood, which presents solutions to the decision problem (which faults to inject) in no particular order. As SREs, we would like to perform first the experiments that reveal the bugs customers are most likely to hit. In this talk, we present new improvements to LDFI that order the experiment suggestions.
In the first half of the talk, we will show that LDFI is a technique that can be widely used within an enterprise. We present the motivation for ordering the chaos experiments, along with some of the prioritizations we utilized while conducting them. We also highlight how ordering is a general-purpose technique that can encode the peculiarities of a heterogeneous microservices architecture. LDFI can work in an enterprise by harnessing the observability infrastructure to model the redundancy of the system.
Next, we present experiments conducted within our organization using ordered LDFI and some preliminary results. We show examples of services where we discovered bugs, and how carefully controlling the order of experiments allowed LDFI to avoid running unnecessary experiments. We also present an example of an application where we declared the service shippable under crash stop model. We also present a comparison with Chaos Monkey and show how LDFI found the known bugs in a given application using orders of magnitude fewer experiments than a random fault injection tool like Chaos Monkey.
Finally, we discuss how we plan to take LDFI forward. We discuss open problems and possible solutions for scalarizing probabilities of failure, latency injection, integration with service-mesh technologies like Envoy for fine-grained fault injection, and fault injection for stateful systems.
Key takeaways: 1) Understand how LDFI can be integrated in the enterprise by harnessing the observability infrastructure. 2) Limitations of LDFI w.r.t unordered solutions and why ordering matters for chaos engineering experiments. 3) Preliminary results of prioritized LDFI and a future direction for the community.
Complexity in systems should be defeated where possible. But our computer systems are complex by nature, and servers are doomed to fail. In this talk, we will go through new approaches in modern architectures for designing and evaluating new computer systems.
Interaction Protocols: It's all about good manners - J On The Beach
Distributed systems collaborate to achieve collective goals via a system of rules. Rules that affords good hygiene, fault tolerance, effective communication and trusted feedback. These rules form protocols which enable the system to achieve its goals.
Distributed and concurrent systems can be considered a social group that collaborates to achieve collective goals. In order to collaborate a system of rules must be applied, that affords good hygiene, fault tolerance, and effective communication to coordinate, share knowledge, and provide feedback in a polite trusted manner. These rules form a number of protocols which enable the group to act as a system which is greater than the sum of the individual components.
In this talk, we will explore the history of protocols and their application when building distributed systems.
A race of two compilers: GraalVM JIT versus HotSpot JIT C2. Which one offers ... - J On The Beach
Do you want to check the efficiency of the new, state of the art, GraalVM JIT Compiler in comparison to the old but mostly used JIT C2? Let’s have a side by side comparison from a performance standpoint on the same source code.
The talk reveals how traditional Just In Time Compiler (e.g. JIT C2) from HotSpot/OpenJDK internally manages runtime optimizations for hot methods in comparison to the new, state of the art, GraalVM JIT Compiler on the same source code, emphasizing all of the internals and strategies used by each Compiler to achieve better performance in most common situations (or code patterns). For each optimization, there is Java source code and corresponding generated assembly code in order to prove what really happens under the hood.
Each test is covered by a dedicated JMH benchmark, with timings and conclusions. Main topics on the agenda: scalar replacement, null checks, virtual calls, lock coarsening, lock elision, lambdas, and vectorization (a few cases).
The tools used during my research were JITWatch, the Java Microbenchmark Harness (JMH), and perf. All test scenarios will be run against the latest official Java release (e.g. version 11).
Leadership is easy when you're a manager, or an expert in a field, or a conference speaker! In a Kanban organisation, though, we "encourage acts of leadership at every level". In this talk, we look at what it means to be a leader in the uncertain, changing and high-learning environment of software development. We learn about the importance of safety in encouraging others to lead and follow, and how to get that safety using both technical and human practices; the necessity of a clear, compelling vision and provision of information on how we're achieving it; and the need to be able to ask awkward and difficult questions... especially the ones without easy answers.
Machine Learning: The Bare Math Behind Libraries - J On The Beach
During this presentation, we will answer how much you’ll need to invest in a superhero costume to be as popular as Superman. We will generate a unique logo which will stand against the ever popular Batman and create new superhero teams. We shall achieve it using linear regression and neural networks.
Machine learning is one of the hottest buzzwords in technology today as well as one of the most innovative fields in computer science – yet people use libraries as black boxes without basic knowledge of the field. In this session, we will strip them to bare math, so next time you use a machine learning library, you’ll have a deeper understanding of what lies underneath.
During this session, we will first provide a short history of machine learning and an overview of two basic teaching techniques: supervised and unsupervised learning.
We will start by defining what machine learning is and equip you with an intuition of how it works. We will then explain the gradient descent algorithm with the use of simple linear regression to give you an even deeper understanding of this learning method. Then we will project it to supervised neural networks training.
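The gradient-descent-with-simple-linear-regression step can be sketched as follows (a toy example in Scala for illustration; the session itself uses Octave, and the data, learning rate, and step count here are made up):

```scala
// Fit y = w*x + b by gradient descent on the mean squared error.
def fit(xs: Array[Double], ys: Array[Double],
        lr: Double = 0.01, steps: Int = 5000): (Double, Double) = {
  val n = xs.length
  var w = 0.0
  var b = 0.0
  for (_ <- 1 to steps) {
    // Gradients of the mean squared error with respect to w and b.
    var gw = 0.0
    var gb = 0.0
    for (i <- xs.indices) {
      val err = w * xs(i) + b - ys(i)
      gw += 2 * err * xs(i) / n
      gb += 2 * err / n
    }
    // Step against the gradient, scaled by the learning rate.
    w -= lr * gw
    b -= lr * gb
  }
  (w, b)
}

// Data generated from y = 2x + 1, so the fit should approach w=2, b=1.
val (w, b) = fit(Array(1.0, 2.0, 3.0, 4.0), Array(3.0, 5.0, 7.0, 9.0))
```

The same update rule, generalized to many weights and chained through layers, is what supervised neural network training builds on.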
Within unsupervised learning, you will become familiar with Hebb’s learning and learning with concurrency (winner takes all and winner takes most algorithms). We will use Octave for examples in this session; however, you can use your favourite technology to implement presented ideas.
Our aim is to show the mathematical basics of neural networks for those who want to start using machine learning in their day-to-day work or use it already but find it difficult to understand the underlying processes. After viewing our presentation, you should find it easier to select parameters for your networks and feel more confident in your selection of network type, as well as be encouraged to dive into more complex and powerful deep learning methods.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Scala, Docker and Testing, oh my! Mario Camou
1. Daniel Brown & Mario Camou
Scala, Docker and Testing, oh my!
[Road]
2. Who are we?
Daniel Brown
Software engineer at eBay
Tinkerer and hacker of electronic devices
Resident mad scientist
@_dlpb / https://github.com/dlpb
Mario Camou
Software engineer at eBay
3D printing enthusiast and Doctor Who fan
“Do what I do. Hold tight and pretend it’s a plan!”
—The Doctor, Season 7, Christmas Special
@thedoc / https://github.com/mcamou/
3. Agenda
● Introduction to Docker
● Why would you use Docker in tests?
● How do you integrate Docker into a Scala project?
● Lessons learned
11. Enter Docker
Dependencies
● Create images of dependencies
● Doesn't solve all setup issues
● But only needs to be done once
● Less storage overhead than a VM
● Databases and webservers = GBs of data
[Docker]
20. The Monolith
A common way of working
● Everything is “in the library”
● Antipatterns
● Black Magic
21. Identify Services we are Dependent Upon
This can be the trickiest step when everything is “in the framework”
● May require the use of debuggers, network sniffers, etc.
● The investment is worth it in the long run
22. Identify Services we are Dependent Upon
Move the configuration of services out to easily controllable (and versionable) config
● Implement, or update, a client to use the new config
● This decouples production configuration from test configuration
24. Stub the Contract
Golden Rule: Keep it Simple!
● Have one response per stub
● Keep them versioned
● Put them in containers
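As a toy illustration of the “one response per stub” rule, a stub can be as small as a single canned, versioned payload. The endpoint path and JSON below are invented for this sketch, not taken from the talk:

```scala
// Hypothetical contract stub following slide 24's rules: one canned,
// versioned response per stub. Path and payload are invented examples.
object UserServiceStubV1 {
  // The single response this stub exists to serve
  val cannedResponse: String =
    """{"id": 42, "name": "Ada", "stubVersion": "v1"}"""

  // Answer only the one request this stub models; anything else is a miss
  def get(path: String): Option[String] =
    if (path == "/users/42") Some(cannedResponse) else None
}
```

Keeping each stub this small is what makes it easy to version and to package one per container.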
25. Run your Tests
Now that we know our dependencies and contracts, and have stubs, we can write tests
● These are focussed only on testing that our system behaves as expected when downstream services offer varying responses
● We are NOT testing the downstream services
26. Run your Tests
● Create a number of different stubs in different Docker images
● Spin up the ones you need for your specific test
● When you are done with the test, tear them down and start again
27. So, how do we go about it?
[Hands]
29. Normal Docker flow
● Create a Dockerfile
● Build the Docker image
● Push it to the registry
● Pull the image from every server
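As a sketch of the first step, a Dockerfile for a stubbed-dependency image might look like the following. The fixture path and the use of the `mongo` base image are assumptions for illustration (the tag matches the one used in the talk's later examples):

```dockerfile
# Hypothetical Dockerfile for a stubbed MongoDB dependency image.
FROM mongo:2.6.12

# Bake canned test data into the image so every test run starts clean
COPY fixtures/ /data/fixtures/

# Same flags the talk uses for fast, disposable test instances
CMD ["mongod", "--nojournal", "--smallfiles", "--syncdelay", "0"]
```

The image would then be built, pushed, and pulled exactly as the flow above describes.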
30. Normal Docker flow
● No static checking of the Dockerfile
● Artifact build is separate from image build
○ Do you have the right artifacts?
○ Did you run the tests before building?
○ Are the latest bits in the image?
32. Integrating Docker with sbt
sbt-docker
An sbt plugin that:
● Creates a Dockerfile
● Creates an image based on that file
● Pushes the image to a registry
https://github.com/marcuslonnberg/sbt-docker
33. Setting the Image Name
imageNames in docker := Seq(
ImageName(
namespace = Some("myOrg"),
repository = name.value,
tag = Some(s"v${version.value}")
),
ImageName(
namespace = Some("myOrg"),
repository = name.value,
tag = Some("latest")
)
)
34. Some Useful vals
val artifact = (assemblyOutputPath in assembly).value
val baseDir = "/srv"
val preInstall = Seq(
"/usr/bin/apt-get update",
s"/usr/sbin/useradd -r -s /bin/false -d $baseDir myUser"
).mkString(" && ")
38. Caveats and Recommendations
● Create a fat JAR: sbt-assembly or sbt-native-packager
● On non-Linux platforms, start up docker-machine and set up the environment variables before starting sbt:
$ docker-machine start theMachine
$ eval $(docker-machine env theMachine)
https://velvia.github.io/Docker-Scala-Sbt/
40. Integration Testing with Docker
Create Docker image(s) containing stubbed external resources
● Tests always run with clean data
● Resource startup is standardized
● Does not require multiple VMs or network calls
41. Integration Testing with Docker
Before your tests:
● Start up the stubbed resource containers
● Wait for the containers to start
After your tests:
● Stop the containers
Can we automate all of this?
43. Container Orchestration during Tests
Orchestration platforms
● Docker Compose
● Kubernetes
● …
Orchestrate inside the test code
44. Using the Docker Java API
val config = DockerClientConfig.createDefaultConfigBuilder()
.withServerAddress("tcp://192.168.99.100:2376")
.withDockerCertPath("/path/to/certificates")
.build
val docker = DockerClientBuilder.getInstance(config).build
val callback = new PullImageResultCallback
docker.pullImageCmd("mongo:2.6.12").exec(callback)
callback.awaitSuccess
val container = docker.createContainerCmd("mongo:2.6.12")
.withCmd("mongod", "--nojournal", "--smallfiles", "--syncdelay", "0")
.exec
docker.startContainerCmd(container.getId).exec
docker.stopContainerCmd(container.getId).exec
val exitcode = docker.waitContainerCmd(container.getId).exec
45. Scala-native Solutions
reactive-docker and tugboat
● Use the Docker REST API directly -> versioning problems
● Unmaintained for > 1 year (not updated to Docker 1.2 API)
● No TLS support
46. Using reactive-docker
implicit val docker = Docker("192.168.99.100", 2375)
val timeout = 30.seconds
val name = "mongodb-test"
val cmd = Seq("mongod", "--nojournal", "--smallfiles", "--syncdelay", "0")
val cfg = ContainerConfiguration(Some("mongo:2.6.12"), Some(cmd))
val (containerId, _) = Await.result(
docker.containerCreate("mongo:2.6.12", cfg, Some(name)), timeout)
val started = Await.result(docker.containerStart(containerId), timeout)
val stopped = Await.ready(docker.containerStop(containerId), timeout)
https://github.com/almoehi/reactive-docker
47. Caveats and Recommendations
● Use beforeAll to ensure containers start up before tests
● Use afterAll to ensure containers stop after tests
Caveats:
● Multiple container start/stops can make tests run much slower
● Need to check when resource (not just container) is up
● Single start/stop means testOnly/testQuick will start up all resources
● Ctrl+C will not stop the stubbed resource containers
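The before/after pattern above can be sketched framework-agnostically. The helper below is a hypothetical stand-in for what beforeAll/afterAll hooks (or a library such as docker-it-scala) would do; `start`, `serviceIsUp`, and `stop` are placeholders for real Docker API calls, not an actual library API:

```scala
// Hypothetical sketch of the container lifecycle around a test body.
object ContainerLifecycle {
  def withStubContainer[A](start: () => Unit,
                           serviceIsUp: () => Boolean,
                           stop: () => Unit)(body: => A): A = {
    start()
    try {
      // Poll until the *service* inside the container answers,
      // not just until the container process is up
      while (!serviceIsUp()) Thread.sleep(10)
      body
    } finally stop() // tear down even if the test body throws
  }
}
```

Wrapping the suite this way addresses two caveats at once: the service-readiness check and guaranteed teardown on failure.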
48. Introducing docker-it-scala
● Uses the (official) docker-java library
● Starts up resource containers in parallel
○ Only when they are needed
○ Once when the tests start
○ Waits for service (not just container) startup
● Automatically shuts down all started stub containers
● Configured via code or via Typesafe Config
https://github.com/whisklabs/docker-it-scala
[Docker-It-Scala]
49. Defining a Resource Container
● Resources are declared as traits and mixed into tests
● Sample implementations available for Cassandra, ElasticSearch, Kafka, MongoDB, Neo4j, PostgreSQL, Zookeeper (in the docker-testkit-samples package)
[Docker-It-Scala]
53. Writing Your Tests
class MyMongoSpec extends FunSpec with DockerMongodbService {
// Test assumes the MongoDB container is running
}
class MyNeo4jSpec extends FunSpec with DockerNeo4jService {
// Test assumes the Neo4j container is running
}
class MyAllSpec extends FunSpec with DockerMongodbService
with DockerNeo4jService with DockerPostgresService {
// Test assumes all 3 containers are running
}
https://github.com/whisklabs/docker-it-scala
[Docker-It-Scala]
58. Performance measurements (from Stubbed Tests)
Create mocks that can simulate (or approximate) real-world conditions
● E.g.
○ Drop every third request
○ Delay 5 seconds before responding
○ Respond instantly for every request
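A minimal sketch of such a mock, with invented behaviour for illustration: a stub that drops every third request, so capacity tests can observe how the system copes with an unreliable downstream service. In a real setup this logic would live inside the stub container:

```scala
// Hypothetical flaky stub for capacity testing: drops every third request,
// responds instantly otherwise. Behaviour is invented for illustration.
class FlakyStub {
  private var requestCount = 0

  def handle(request: String): Option[String] = {
    requestCount += 1
    if (requestCount % 3 == 0) None  // simulate a dropped request
    else Some(s"OK: $request")       // instant canned response
  }
}
```

A delay-injecting variant would sleep before answering instead of dropping; the point is that the failure mode is deterministic and repeatable.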
59. Performance measurements (from Stubbed Tests)
Use these new stubs to gather data about how your system performs
● When it is stressed;
● When downstream services are stressed
60. Performance measurements (from Stubbed Tests)
Use these new stubs to gather data about how your system performs
● When it is stressed;
● When downstream services are stressed
Gather the metrics!
63. Going Further
What did we decide from the graph?
● Technical limitations for our product
64. Going Further
What did we decide from the graph?
● Technical limitations for our product
● Business policies for the product
65. Summary
Isolation is key to gathering meaningful test data and keeping testing strategies sane
Docker can ease the pain of managing stub dependencies during integration testing
Meaningful tests can tell you a lot about the behaviour of your system and therefore influence both architecture and UX
67. Acknowledgements & Notes
[Docker] Docker: www.docker.com
Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.
[Fire] Fire image, Creative Commons CC0 License https://www.pexels.com/photo/fire-orange-emergency-burning-1749/
[Clock] Clock image, Creative Commons https://www.flickr.com/photos/75680924@N08/6830220892/in/photostream/
[Space needle] Space needle, Creative Commons Zero https://unsplash.com/photos/-48aJfQpFCE
[Explosion] Vehicle and building on fire, Creative Commons Zero https://unsplash.com/photos/28v9cq7ytNU
[Stonehenge] Stonehenge, Public Domain
[Cat] Firsalar the fluffy cat loves to sit in boxes, CC-BY 2.0
[Construction Kit] Free Universal Construction Kit by F.A.T. Lab and Sy-Lab http://fffff.at/free-universal-construction-kit/
[MFS] While MFS was released to production for a short time, it was not released for external customer use.
[Audience] Audience, Creative Commons Zero https://unsplash.com/photos/bBQ9lhB-wpY
[Bird] Bird Stare, Creative Commons Zero https://unsplash.com/photos/ig9lRTGT0h8
[Road] Yellow Brick road to Lost Hatch, CC-BY-NC 2.0 https://www.flickr.com/photos/wvs/352414272
[Hands] Mud Hands, CC-BY-NC 2.0 https://www.flickr.com/photos/migueltejadaflores/13783685515
[Docker-it-scala] Docker IT Scala, Whisk Labs, MIT https://github.com/whisklabs/docker-it-scala