In recent years, we have seen an overwhelming number of TV commercials promising that the Cloud can help with many problems, including some family issues. What stands behind the terms “Cloud” and “Cloud Computing,” and what can we actually expect from this phenomenon? A group of students from the Computer Systems Technology department and Dr. T. Malyuta, who has been working with Cloud technologies since their early days, will provide an overview of the business and technological aspects of the Cloud.
As part of the NoSQL series, I presented the Google Bigtable paper. In the presentation I tried to give a plain introduction to Hadoop, MapReduce, and HBase.
www.scalability.rs
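To make the MapReduce model concrete, here is a minimal single-process word-count sketch in plain Java. It only illustrates the map, shuffle/group, and reduce data flow; a real Hadoop job distributes these phases across machines, and none of the names below come from the Hadoop API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Single-process sketch of the MapReduce data flow (word count).
class WordCount {
    static Map<String, Integer> count(List<String> lines) {
        // Map phase: emit (word, 1) for every word, grouped by key
        // (the grouping stands in for MapReduce's shuffle step).
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String line : lines)
            for (String word : line.toLowerCase().split("\\s+"))
                grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);

        // Reduce phase: sum the emitted values for each key.
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }
}
```

The same three-step shape (map, group by key, reduce) is what HBase-backed MapReduce jobs execute at scale.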
This presentation is about NoSQL, which means “Not Only SQL.” It covers the use of NoSQL for Big Data and the differences from an RDBMS.
Quantitative Performance Evaluation of Cloud-Based MySQL (Relational) Vs. Mon..., by Darshan Gorasiya
This work compares the performance of MySQL (consistency and availability, CA) with MongoDB (consistency and partition tolerance, CP). Yahoo! Cloud Serving Benchmark (YCSB) automated workloads were used for a quantitative comparison with large and small data volumes.
Slides from the workshop on parallel processing using GPU infrastructure
The first national cloud computing workshop in Iran
Vahid Amiri
vahidamiry.ir
Amirkabir University of Technology, 1391 (2012)
The term "Data Lake" has become almost as overused and undescriptive as "Big Data". Many believe that centralizing datasets in HDFS makes a data lake, but then they struggle to realize any tangible value. This talk will redefine the "Data Lake" by describing four specific, key characteristics that we at Koverse have learned are crucial to successful enterprise data lake deployments. These characteristics are 1) indexing and search across all data sets, 2) interactive access for all users in the enterprise, 3) multi-level access control, and 4) integration with data science tools. These characteristics define a system that lets people realize value from their data versus getting lost in the hype. The talk will go on to provide a technical description of how we have integrated several projects, namely Apache Accumulo, Hadoop, and Spark, to implement an enterprise data lake with these key features.
Presentation 1.9: A Spanish eco-house powered by solar and water energy, by Pavel Efimov
The Spanish architecture firm Abaton realized a project for an autonomous eco-house in the west of the country, in the Extremadura region. An old stable was converted into a self-sufficient home for a large family.
In November 2015, we will be launching Strategic Doing at the Sunshine Coast Futures Conference. One of the top three innovative regions in Australia, the Sunshine Coast includes civic leaders willing to experiment with new approaches to getting things done. The University of the Sunshine Coast is partnering with Purdue University to move Strategic Doing to Australia.
Smart Mobility Policies with Evolutionary Algorithms: The Adapting Info Panel..., by Daniel H. Stolfi
In this article, we propose the Yellow Swarm architecture for reducing travel times, greenhouse gas emissions, and fuel consumption of road traffic by using several LED panels to suggest changes in the direction of vehicles (detours) during different time slots. These time intervals are calculated by an evolutionary algorithm, specifically designed for our proposal, which evaluates many working scenarios based on real cities, imported from OpenStreetMap into the SUMO traffic simulator. Our results show an improvement in average travel times, emissions, and fuel consumption even when only a small percentage of drivers follow the indications provided by our panels.
http://doi.acm.org/10.1145/2739480.2754742
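As an illustration of the kind of evolutionary algorithm the abstract relies on, here is a minimal (1+1) evolutionary algorithm sketch in Java. The traffic-scenario fitness evaluated in SUMO is replaced by a toy count-the-ones objective; all names are illustrative and not from the paper.

```java
import java.util.Random;

// A (1+1) evolutionary algorithm on a bit string: mutate one gene,
// keep the child if it is at least as fit as the parent.
class OneMaxEA {
    // Toy fitness: number of "true" genes (stand-in for a scenario score).
    static int fitness(boolean[] genome) {
        int ones = 0;
        for (boolean g : genome) if (g) ones++;
        return ones;
    }

    static boolean[] evolve(int length, int generations, Random rng) {
        boolean[] best = new boolean[length]; // start from all-false
        for (int gen = 0; gen < generations; gen++) {
            boolean[] child = best.clone();
            child[rng.nextInt(length)] ^= true; // flip one random gene
            if (fitness(child) >= fitness(best)) best = child;
        }
        return best;
    }
}
```

In the paper's setting, each genome would encode panel detour suggestions per time slot, and fitness would come from a traffic simulation rather than a bit count.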
Virtual versions of servers, applications, networks and storage can be created through virtualization. Its main types include operating system virtualization (VMs), hardware virtualization, application-server virtualization, storage virtualization, network virtualization, administrative virtualization and application virtualization.
Server virtualization is a technology for partitioning one physical server into multiple virtual servers. Each of these virtual servers can run its own operating system and applications and perform as if it were an individual server. This makes it possible, for example, to carry out development using various operating systems on one physical server or to consolidate servers used by multiple business divisions.
This work introduces faceted service discovery. It uses the Programmable Web directory as its corpus of APIs and enhances the search to enable faceted search, given an OWL ontology. The ontology describes semantic features of the APIs. We have designed the API classification ontology using LexOnt, a tool we have built for semi-automatic ontology creation. LexOnt is geared toward non-experts within a service domain who want to create a high-level ontology that describes the domain. Using well-known NLP algorithms, LexOnt generates a list of top terms and phrases from the Programmable Web corpus to enable users to find high-level features that distinguish one Programmable Web service category from another. To further aid non-experts, LexOnt relies on outside sources such as Wikipedia and WordNet to help the user identify the important terms within a service category. Using the ontology created with LexOnt, we have built APIBrowse, a faceted search interface for APIs. The ontology, in combination with the Apache Solr search platform, is used to generate a faceted search interface for APIs based on their distinguishing features. With this ontology, an API is classified underneath multiple categories and displayed within the APIBrowse interface. APIBrowse gives programmers the ability to search for APIs based on their semantic features and keywords, and presents them with a filtered and more accurate set of search results.
Knarig Arabshian has been an Assistant Professor in the Computer Science Department at Hofstra University since Fall 2014. Prior to that she was a Member of Technical Staff at Bell Labs in Murray Hill, NJ. She received her Ph.D. in Computer Science from Columbia University in 2008.
Professor Arabshian’s interests lie in the field of semantic web, service discovery and composition, context-aware computing and distributed systems. The goal of her research is to drive forward the idea of a personalized web. Her work explores ways of describing data meaningfully and designing frameworks and systems for efficient data discovery. During her tenure at Bell Labs, she worked on different aspects of ontology creation, distribution and querying.
The skeletal implementation pattern is a software design pattern consisting of defining an abstract class that provides a partial interface implementation. However, since Java allows only single class inheritance, if implementers decide to extend a skeletal implementation, they will not be allowed to extend any other class. Also, discovering the skeletal implementation may require a global analysis.
Java 8 enhanced interfaces alleviate these problems by allowing interfaces to contain (default) method implementations, which implementers inherit. Java classes are then free to extend a different class, and a separate abstract class is no longer needed; developers considering implementing an interface need only examine the interface itself.
In this talk, I will argue that both these benefits improve software modularity, and I will discuss our ongoing work in developing an automated refactoring tool that would assist developers in taking advantage of the enhanced interface feature for their legacy Java software.
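A minimal sketch of the enhanced-interface feature discussed above (the names Shape and Square are illustrative, not from the talk):

```java
// Java 8 enhanced interface: perimeter() is a default method, so no
// skeletal abstract class is needed and implementers keep their single
// class-inheritance slot free.
interface Shape {
    double[] sideLengths(); // the only method implementers must define

    default double perimeter() { // inherited by every implementer
        double sum = 0;
        for (double s : sideLengths()) sum += s;
        return sum;
    }
}

class Square implements Shape { // free to extend another class instead
    private final double side;
    Square(double side) { this.side = side; }
    public double[] sideLengths() {
        return new double[] { side, side, side, side };
    }
}
```

Under the skeletal-implementation pattern, perimeter() would live in an AbstractShape class, and Square would have to spend its one superclass slot extending it.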
Raffi Khatchadourian is an Assistant Professor in the Department of Computer Systems Technology (CST) at New York City College of Technology (NYCCT) of the City University of New York (CUNY) and an Open Educational Resources (OER) Fellow for the Spring 2016 semester. His research is centered on techniques for automated software evolution, particularly those related to automated refactoring and source code recommendation systems. His goal is to ease the burden associated with correctly and efficiently evolving large and complex software by providing automated tools that can be easily used by developers.
Raffi received his MS and PhD degrees in Computer Science from Ohio State University and his BS degree in Computer Science from Monmouth University in New Jersey. Prior to joining City Tech, he was a Software Engineer at Apple, Inc. in Cupertino, California, where he worked on Digital Rights Management (DRM) for iTunes, iBooks, and the App store. He also developed distributed software that tested various features of iPhones, iPads, and iPods.
Most tools that scientists use for the preparation of scholarly manuscripts, such as Microsoft Word and LaTeX, function offline and do not account for the born-digital nature of research objects. Also, most authoring tools in use today are not designed for collaboration, and, as scientific collaborations grow in size, research transparency and the attribution of scholarly credit are at stake. In this talk, I will show how the Authorea platform allows scientists to collaboratively write rich data-driven manuscripts on the web: articles that natively offer readers a dynamic, interactive experience with an article’s full text, images, data, and code, paving the way to increased data sharing, data reuse, research reproducibility, and Open Science.
Alberto Pepe is the co-founder of Authorea. He recently finished a Postdoctorate in Astrophysics at Harvard University. During his postdoctorate, Alberto was also a fellow of the Berkman Center for Internet and Society and the Institute for Quantitative Social Science. Alberto is the author of 30 publications in the fields of Information Science, Data Science, Computational Social Science, and Astrophysics. He obtained his Ph.D. in Information Science from the University of California, Los Angeles with a dissertation on scientific collaboration networks which was awarded with the Best Dissertation Award by the American Society for Information Science and Technology (ASIS&T). Prior to starting his Ph.D., Alberto worked in the Information Technology Department of CERN, in Geneva, Switzerland, where he worked on data repository software and also promoted Open Access among particle physicists. Alberto holds a M.Sc. in Computer Science and a B.Sc. in Astrophysics, both from University College London, U.K. Alberto was born and raised in the wine-making town of Manduria, in Puglia, Southern Italy.
Cardiotoxicity is unfortunately a common side effect of many modern chemotherapeutic agents. The mechanisms that underlie these detrimental effects on heart muscle, however, remain unclear. The Drug Toxicity Signature Generation Center at ISMMS aims to address this unresolved issue by providing a bridge between molecular changes in cells and the prediction of pathophysiological effects. I will discuss ongoing work in which we use next-generation sequencing to quantify changes in gene expression that occur in cardiac myocytes after they are treated with potentially toxic chemotherapeutic agents. I will focus in particular on the computational pipeline we are developing that integrates sophisticated sequence alignment, statistical and network analysis, and dynamical mathematical models to develop novel predictions about the mechanisms underlying drug-induced cardiotoxicity.
Jaehee Shim is a Ph.D. candidate in the Biophysics and Systems Pharmacology Program at the Icahn School of Medicine at Mount Sinai (ISMMS). As part of her Ph.D. studies, she is building dynamical prediction models based on analysis of gene expression data generated by the Drug Toxicity Signature Generation Center at ISMMS. She received her B.S. in Biochemistry from the University of Michigan-Dearborn. Prior to starting her Ph.D., Jaehee worked at the ISMMS Genomics Core with a team of senior scientists and gained experience in improving and troubleshooting RNA sequencing protocols on Next-Generation Sequencing platforms.
Traditional approaches to anti-money laundering involve simple matching algorithms and a lot of human review. In recent years, however, this approach has proven not to scale well with the increasingly strict regulatory environment. We at Bayard Rock have had much success applying fancier approaches, including some machine learning, to this problem. In this talk I will walk you through the general problem domain and talk about some of the algorithms we use. I’ll also dip into why and how we leverage typed functional programming for rapid iteration with a small team in order to out-innovate our competitors.
Bayard Rock, LLC, is a private research and software development company with headquarters in the Empire State Building. It is a leader in the field in the research and development of tools for improving the state of the art in anti-money laundering and fraud detection. As you might imagine, these tools rely heavily on mathematics and graph algorithms. In this talk, Richard Minerich will discuss the research activities of Bayard Rock and its approaches to building tools to find the “bad guys.” Richard Minerich is Bayard Rock’s Director of Research and Development. Rick has expertise in F#, C#, C, C++, C++/CLI, .NET (1.1, 2.0, 3.0, 3.5, 4.0, and 4.5), object-oriented design, functional design, entity resolution, machine learning, concurrency, and image processing. He is interested in working on algorithmically and mathematically complex projects and remains open to exploring new ideas.
Rick holds two patents. The first, co-invented with a colleague, is titled “Method of Image Analysis Using Sparse Hough Transform.” The other, held independently, is titled “Method for Document to Template Alignment.”
Recent years have seen the emergence of several static analysis techniques for reasoning about programs. This talk presents several major classes of techniques and tools that implement these techniques. Part of the presentation will be a demonstration of the tools.
Dr. Subash Shankar is an Associate Professor in the Computer Science department at Hunter College, CUNY. Prior to joining CUNY, he received a PhD from the University of Minnesota and was a postdoctoral fellow in the model checking group at Carnegie Mellon University. Dr. Shankar also has over 10 years of industrial experience, mostly in the areas of formal methods and tools for analyzing hardware and software systems.
With the proliferation of testing culture, many developers are facing new challenges. As projects are getting started, the focus may be on developing enough tests to maintain confidence that the code is correct. However, as developers write more and more tests, performance and repeatability become growing concerns for test suites. In our study of large open source software, we found that running tests took on average 41% of the total time needed to build each project, and over 90% in those that took the longest to build. Unfortunately, typical techniques from the literature for accelerating test suites (like running only a subset of tests, or running them in parallel) can’t be applied safely in practice, since tests may depend on each other. These dependencies are very hard to find and detect, posing a serious challenge to test and build acceleration. In this talk, I will present my recent research in automatically detecting and isolating these dependencies, enabling significant, safe, and sound build acceleration of up to 16x.
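A tiny Java illustration of the kind of hidden test-order dependency described above (the class and method names are made up for the example):

```java
// Two "tests" that share mutable static state: testB passes only if
// testA ran first, so reordering, subsetting, or parallelizing the
// suite silently breaks it.
class OrderDependentTests {
    static int sharedCounter = 0; // hidden dependency between the tests

    static boolean testA() {
        sharedCounter = 1;
        return sharedCounter == 1;
    }

    static boolean testB() {
        return sharedCounter == 1; // only true after testA has run
    }
}
```

Nothing in testB's code declares its reliance on testA, which is why such dependencies must be detected automatically before a build can be safely accelerated.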
Big data is set to offer tremendous insight. But with terabytes and petabytes of data pouring in to organizations today, traditional architectures and infrastructures are not up to the challenge. This raises the question: How do you present big data in a way that can be quickly understood and used? These data present tremendous opportunities in data mining, a burgeoning field in computer science that focuses on the development of methods that can extract knowledge from data. In many real-world problems, data mining algorithms have access to massive amounts of data. Mining all the available data is prohibitive due to computational (time and memory) constraints. Much of the current research is concerned with scaling up data mining algorithms (i.e., improving existing data mining algorithms to handle larger datasets). An alternative approach is to scale down the data. Thus, determining the smallest sufficient training set size that obtains the same accuracy as the entire available dataset remains an important research question. Our research focuses on selecting how many instances to present to the data mining algorithm (sampling) and also on how to improve the quality of the data.
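One simple way to "scale down the data" as described above is uniform sampling. The sketch below uses reservoir sampling, a standard technique for drawing k instances uniformly from a dataset of unknown size; it is an illustration of the general idea, not the specific method used in this research.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Reservoir sampling: keep a uniform sample of k instances from a
// dataset scanned in one pass, without knowing its size in advance.
class Reservoir {
    static <T> List<T> sample(Iterable<T> data, int k, Random rng) {
        List<T> reservoir = new ArrayList<>();
        int seen = 0;
        for (T item : data) {
            seen++;
            if (reservoir.size() < k) {
                reservoir.add(item); // fill the reservoir first
            } else {
                int j = rng.nextInt(seen); // uniform index in [0, seen)
                if (j < k) reservoir.set(j, item); // replace with prob k/seen
            }
        }
        return reservoir;
    }
}
```

Because the pass is sequential and memory use is O(k), the same routine works whether the dataset fits in RAM or streams from disk.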
Dr. Ashwin Satyanarayana is an Assistant Professor in the Computer Systems Technology department at CityTech. Prior to joining CityTech, Ashwin was a Research Scientist at Microsoft, where he worked on several Big Data problems, including query reformulation on Microsoft's search engine Bing. Ashwin's prior experience also includes work as a Senior Research Scientist in the area of location analytics at Placed Inc. He holds a PhD in Computer Science (Data Mining) from SUNY, with particular emphasis on data mining, machine learning, and applied probability with applications in real-world learning problems.
Java 8 is one of the largest upgrades to the popular language and framework in over a decade. This talk will detail several new key features of Java 8 that can help make programs easier to read, write, and maintain. Java 8 comes with many features, especially related to collection libraries. We will cover such new features as Lambda Expressions, the Stream API, enhanced interfaces, and more.
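As a taste of the features mentioned above, the following sketch passes a lambda expression and a method reference to the Stream API (the data and names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

// A lambda expression and a method reference driving the Stream API.
class StreamDemo {
    static List<String> shortNamesUppercased(List<String> names) {
        return names.stream()
                .filter(n -> n.length() <= 4) // lambda as a predicate
                .map(String::toUpperCase)     // method reference
                .collect(Collectors.toList());
    }
}
```

The same filter-map-collect pipeline written with pre-Java-8 loops would take roughly three times the code and bury the intent in iteration boilerplate.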
“Mobile is eating the world,” but few developers realize that mobile software is written very differently from desktop software. This leads to lots of mobile apps that simply don’t work well, suck up battery power, or can’t recover from being put into the background. I’ll discuss a few such apps on the Android platform, and explain how they should have been written to improve user experience, illustrating general mobile development principles by example.
Prosody is an essential component of human speech. Broadly, prosody describes all of the production qualities of speech that are not involved in conveying lexical information. Where the words are “what is said,” prosody is “how it is said.” Prosody plays an important role not only in communicating the syntax, semantics, and pragmatics of spoken language, but also in conveying information about the speaker and their internal state (e.g., emotion or fatigue).
Understanding prosody is critical to understanding speech communication. Spoken language processing (SLP) technology that approaches human levels of competence will necessarily include automatic analysis of prosody. Despite the importance of prosody in spoken communication, researchers are often unable to reliably incorporate prosodic information into applications. One explanation is a lack of compact, consistent, and universal representations of prosodic information. This talk will describe the state of the art in prosodic analysis and its use in spoken language processing with a focus on the development of new representations of prosody.
More from New York City College of Technology Computer Systems Technology Colloquium
State of ICS and IoT Cyber Threat Landscape Report 2024 preview, by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio’s cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, by Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters, by Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdf, by Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -..., by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We ended with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales, by Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
2. What is a Virtual Machine?
A virtual machine (VM) is a software computer that, like a physical computer, runs an operating system and applications. Every virtual machine has virtual devices that provide the same functionality as physical hardware, with additional benefits in portability, manageability, and security.
3. What is Virtualization?
Virtualization is the creation of a virtual version of a device or resource, such as a server, storage device, network, or software (including an operating system). This allows physical hardware resources to be shared by multiple applications (VMs).
Hypervisor, or virtual machine manager (VMM): a program that allows multiple operating systems to share a single hardware host. Examples: VMware ESXi, Microsoft Hyper-V, and KVM.
Non-Virtual Machine and VM Configurations
4. Advantages of Virtualization
- Reduce capital and operating costs
- Deliver high application availability
- Minimize or eliminate downtime
- Increase IT productivity, efficiency, agility, and responsiveness
- Speed and simplify application and resource provisioning
- Support business continuity and disaster recovery
- Enable centralized management
5. Types of Hypervisor
1. Type 1 (bare-metal hypervisor): runs directly on the system hardware. Examples: VMware ESXi, Citrix XenServer, Microsoft Hyper-V.
2. Type 2 (hosted hypervisor): runs on top of a host operating system. Examples: VMware Workstation, VMware Fusion, VirtualBox.
Difference Between Type 1 and 2 Hypervisors
6. Server Virtualization – What is it?
Server virtualization is a virtualization technique that partitions a physical server into a number of smaller, virtual servers with the help of virtualization software. In server virtualization, each physical server runs multiple operating system instances at the same time.
7. Traditional Storage
Traditional storage is directly attached to servers and cannot be shared beyond the physical server. This makes it difficult for administrators to assign storage to each application according to its requirements: all applications running on a physical server are forced to use the same storage, with the same storage characteristics.
8. Solution to Traditional Storage – Storage Virtualization
Storage virtualization solves these problems by adding a new layer of software and/or hardware between storage systems and servers. Centralized storage enables servers to share resources, so applications no longer need to know on which specific drives, partitions, or storage subsystems their data resides. Storage virtualization is commonly used in storage area networks (SANs).
9. Storage Area Network (SAN)
A storage area network is the most comprehensive centralized storage solution. It allows true storage sharing because data is stored at the block level, which means applications, including the OS, can directly access the storage device as if it were locally attached. Each block can be controlled as an individual hard drive: blocks are controlled by server-based operating systems, and each block can be individually formatted with the required file system.
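Block-level access, as described above, simply means reading and writing fixed-size blocks by their offset on the device. The following toy sketch illustrates the idea in Python, using a temporary file as a stand-in for a SAN LUN (the block size and "LUN" size are illustrative assumptions, not VMware or SAN specifics):

```python
import tempfile

BLOCK_SIZE = 512  # classic disk sector size; SANs expose storage in fixed-size blocks

def write_block(dev, block_no, data):
    """Write one block at its byte offset, the way an OS addresses a SAN LUN."""
    assert len(data) == BLOCK_SIZE
    dev.seek(block_no * BLOCK_SIZE)
    dev.write(data)

def read_block(dev, block_no):
    """Read back the block stored at the given block number."""
    dev.seek(block_no * BLOCK_SIZE)
    return dev.read(BLOCK_SIZE)

# A temporary file stands in for the block device (LUN).
with tempfile.TemporaryFile() as dev:
    dev.truncate(8 * BLOCK_SIZE)                  # a tiny 8-block "LUN"
    payload = b"hello SAN".ljust(BLOCK_SIZE, b"\x00")
    write_block(dev, 3, payload)                  # address block 3 directly
    assert read_block(dev, 3) == payload
    print("block 3 round-trip OK")
```

Because the OS sees raw blocks rather than files, it can lay any file system it likes over them, which is exactly why each SAN block device can be individually formatted.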
10. Network Virtualization – Standard vSwitch
VMware network virtualization provides “virtual networks” to virtual machines, similar to how server virtualization (the hypervisor) provides “virtual machines” to the operating system.
A standard virtual switch, or vSwitch, is responsible for connecting virtual machines to a virtual network. A vSwitch works like a physical switch, with some limitations, and controls how virtual machines communicate with one another.
vSphere Standard vSwitch
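The core of what a vSwitch does is ordinary layer-2 forwarding: learn which port a source MAC address lives on, send known unicast traffic to that port, and flood unknown destinations. The toy model below sketches that behavior (it is an illustration of the general technique, not VMware code; the port names are hypothetical):

```python
class ToyVSwitch:
    """Minimal layer-2 forwarding model of a virtual switch (illustrative only)."""

    def __init__(self, ports):
        self.ports = set(ports)      # virtual ports, one per VM vNIC
        self.mac_table = {}          # learned MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of ports a frame is sent out of."""
        self.mac_table[src_mac] = in_port          # learn where src_mac lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known unicast: one port
        return sorted(self.ports - {in_port})      # unknown dst: flood the rest

sw = ToyVSwitch(["vm1", "vm2", "vm3"])
print(sw.forward("vm1", "aa:aa", "bb:bb"))  # unknown dst -> flood: ['vm2', 'vm3']
print(sw.forward("vm2", "bb:bb", "aa:aa"))  # aa:aa was learned -> ['vm1']
```

A real vSwitch differs in the ways the slide notes (no spanning tree, uplinks to physical NICs, per-port policies), but the learn-and-forward loop is the same idea.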
11. Network Virtualization – Distributed vSwitch
Distributed vSwitches, also known as VMware vDS, offer more features than standard vSwitches (sometimes called VMware vSS). A standard vSwitch works within one ESXi host only; distributed vSwitches allow different hosts to use the same switch, as long as they exist within the same host cluster.
vSphere Distributed vSwitch
12. Summary
- A virtual machine (VM) is a software computer that, like a physical computer, runs an operating system and applications.
- Virtualization is the creation of a virtual version of a device or resource, such as a server, storage device, network, or software (including an operating system).
- There are two types of hypervisor: bare-metal (Type 1) and hosted (Type 2).
- Server virtualization partitions a physical server into a number of smaller, virtual servers with the help of virtualization software.
- Storage virtualization solves the limitations of traditional storage by adding a new layer of software and/or hardware between storage systems and servers.
- Network virtualization provides “virtual networks” to virtual machines, similar to how server virtualization (the hypervisor) provides “virtual machines” to the operating system.
13. Virtualization and the Cloud
The encapsulation and mobility that virtualization provides allow a live virtual machine to be moved with no downtime for the application, keeping its dependency on any one piece of the cloud infrastructure minimal.
Virtualization powers cloud computing and increases IT scalability, agility, flexibility, and performance while creating major cost savings. With server, storage, and network virtualization, cloud computing enables companies to react faster to the needs of the business while driving greater operational efficiencies.
14. References
"Virtualization." VMware. Web. 14 Oct. 2015. <http://www.vmware.com/virtualization.html>.
"Cloud Computing and Virtualization." Jan Kremer Consulting Services. Web. 14 Oct. 2015. <http://jkremer.com/White%20Papers/Cloud%20Computing%20and%20Virtualization%20White%20Paper%20JKCS.pdf>.
Freeman, Bill. "Best Hardware for Server Virtualisation." TouchPoint. 17 Feb. 2015. Web. 20 Nov. 2015. <http://touchpoint.com.au/blog/best-hardware-for-server-virtualisation/>.
Siebert, Eric. "Selecting CPU, processors and memory for virtualized environments." TechTarget. Web. 20 Nov. 2015. <http://searchservervirtualization.techtarget.com/tip/Selecting-CPU-processors-and-memory-for-virtualized-environments>.
"Centralized Storage." Netcal. Web. 20 Nov. 2015. <http://www.netcal.com/centralized-storage/>.
Jorgenson, Petra. "Virtual Networking 101: Understanding VMware Networking." Pluralsight. 30 May 2012. Web. 20 Nov. 2015. <http://blog.pluralsight.com/virtual-networking-101-understanding-vmware-networking>.
Davis, David. "VMware's standard and distributed virtual switches: What resellers need to know." TechTarget. Feb. 2010. Web. 20 Nov. 2015. <http://searchitchannel.techtarget.com/tip/VMwares-standard-and-distributed-virtual-switches-What-resellers-need-to-know>.
"How do switches, vSwitches and distributed vSwitches differ?" TechTarget. 11 June 2013. Web. 20 Nov. 2015. <http://searchvmware.techtarget.com/photostory/2240185944/Getting-VMware-terminology-straight/9/How-do-switches-vSwitches-and-distributed-vSwitches-differ>.
16. Server Virtualization – Software Requirements
Software vendors: VMware vSphere, Microsoft Hyper-V, Citrix XenServer.
There are more similarities between these software platforms than differences:
- All can manage processor, memory, network, and disk resources.
- All support both Microsoft Windows and Linux operating environments, and some support Solaris Unix as well.
- The main differences between vendors lie in performance, reliability, and advanced management features.
VMware is widely regarded as the leader in virtualization platforms.
17. Server Virtualization – Hardware Requirements
Choosing the best hardware for virtualization begins with a server's memory and CPU. A lack of either can directly affect performance.
Memory: Memory is often the most limiting factor in the number of virtual machines a server can host. Ensuring an adequate amount of fast RAM plays a huge role in the server's virtualization capabilities.
CPU: Selecting a CPU with multiple cores can significantly increase performance and throughput. There are two major CPU brands on the market, Intel and AMD. Both have integrated virtualization extensions: Intel Virtualization Technology (Intel-VT) and AMD Virtualization (AMD-V). Choosing the right CPU brand depends on your current environment: if your servers already use a particular brand, it is a good idea to stick with it, because a VM running on Intel cannot be moved to AMD, and vice versa.
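The sizing guidance above can be sketched as two small helpers: a back-of-the-envelope estimate of how many VMs fit when RAM is the bottleneck, and a check of a Linux /proc/cpuinfo flags line for the Intel-VT (`vmx`) or AMD-V (`svm`) extensions. The overhead figure and the example numbers are illustrative assumptions, not vendor recommendations:

```python
def max_vms(host_ram_gb, vm_ram_gb, hypervisor_overhead_gb=2):
    """Rough count of VMs a host can hold when memory is the limiting factor."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return max(usable // vm_ram_gb, 0)

def has_vt_extensions(cpuinfo_flags):
    """Check a /proc/cpuinfo flags line for Intel-VT (vmx) or AMD-V (svm)."""
    flags = set(cpuinfo_flags.split())
    return bool(flags & {"vmx", "svm"})

print(max_vms(host_ram_gb=128, vm_ram_gb=8))        # -> 15
print(has_vt_extensions("fpu vme de pse tsc vmx"))  # -> True
```

In practice memory overcommit, ballooning, and CPU scheduling make real capacity planning more involved, but this captures the first-order arithmetic the slide describes.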
18. Network Virtualization – Standard vSwitch Features
The standard vSwitch offers the following features:
- Layer 2 forwarding
- 802.1Q VLAN tagging
- Multicast support
- EtherChannel
- Load balancing
- Tx rate limiting
- Port security
- CDP
- …
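One of the listed features, 802.1Q VLAN tagging, works by inserting a 4-byte tag into the Ethernet frame: a TPID of 0x8100 followed by a Tag Control Information field packing priority (PCP, 3 bits), drop eligibility (DEI, 1 bit), and the 12-bit VLAN ID. A minimal sketch of building that tag (the example VLAN IDs are arbitrary):

```python
import struct

def vlan_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag: TPID 0x8100 followed by the PCP/DEI/VID TCI."""
    if not 0 <= vid < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (pcp << 13) | (dei << 12) | vid   # pack the Tag Control Information
    return struct.pack("!HH", 0x8100, tci)  # network byte order, two 16-bit words

print(vlan_tag(vid=100).hex())  # -> '81000064'
```

When a vSwitch performs virtual switch tagging, it inserts this tag on frames leaving a port group and strips it on arrival, so each VM stays unaware of the VLAN layout.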
19. Network Virtualization – Distributed vSwitch Features
Unlike standard vSwitches, which can be managed from the local host, distributed vSwitches must be created and controlled through vCenter Server. VMware vCenter Server provides centralized management of the vSphere virtual infrastructure: IT administrators can ensure security and availability, simplify day-to-day tasks, and reduce the complexity of managing virtual infrastructure.
The distributed vSwitch offers all the features of the standard vSwitch, plus the following:
- Centralized configuration for all network switch ports, across the entire virtual infrastructure
- Private VLANs
- Support for third-party switches (with the only option today being the Cisco Nexus 1000-V)