Presented at Monitorama 2017, this talk discusses how to make humans more effective "monitors" in the complex sociotechnical systems in which they work.
Beyond the Retrospective: Embracing Complexity on the Road to Service Ownership - J. Paul Reed
This document summarizes a presentation given by Kevin Finn-Braun of Intuit and J. Paul Reed at the DevOps Enterprise Summit 2016. The presentation discusses moving beyond traditional retrospective approaches to embrace complexity and service ownership. It outlines different levels of experience with incident analysis, from novice to expert, identifying behaviors and approaches associated with each level. These include how incidents are discussed, the focus of retrospectives, and how outcomes are applied. The document also introduces the incident lifecycle model of detection, response, remediation and prevention.
Gain Maximum Visibility into Your Applications - DEM03 - Chicago AWS Summit - Amazon Web Services
Visibility into your applications and systems is critical to guarding against errors, maintaining uptime, and protecting performance. In this session, we show how DevOps enables us to build better systems by leveraging the perspectives of different teams in order to gain that visibility. This session is brought to you by AWS partner, Datadog.
100% Visibility - Jason Yee - Codemotion Amsterdam 2018 - Codemotion
Monitoring systems has traditionally been the responsibility of Ops teams. But our goal is to align devs, ops, & other roles in the organization (aka DevOps), so we need to ensure they are all monitoring critical business systems & do so in ways that take advantage of the unique perspective that each role offers. In this session, I’ll break down the expansive monitoring landscape into 5 categories that each provide a unique view of your systems. I’ll show how each category allows your team to have complete observability, avoid blind spots, & work together to quickly resolve issues & outages.
SearchLove London | Kelvin Newman, 'What the Flash Crash and Black Boxes can ...' - Distilled
May 6th, 2010, the Dow Jones Industrial Average plunged about 1,000 points, only to recover those losses within minutes – this was the Flash Crash. No catastrophes or physical events caused this swing; it was the black boxes of stock market algorithms. Black boxes a lot like Google’s. How do we prepare for the future when even Google doesn’t know how its algorithm works?
Mining Events from Multimedia Streams (WAIS Research group seminar June 2014) - Jonathon Hare
The document discusses mining meaningful events and trends from multimedia streams on social media. It describes challenges including dealing with massive amounts of data and making effective use of different modalities. It then presents two case studies: monitoring Twitter's visual pulse by detecting trending images, and detecting social events from a Flickr image collection using features like time, location, text and image similarities between photos. Clustering algorithms are used to group related photos into events. Feature weighting is also explored to determine the most important features for separating events.
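The multimodal clustering idea described in the summary, grouping photos into events by combining time, location, and text similarity with per-feature weights, might be sketched as follows. The weights, decay constants, and the greedy single-pass grouping are illustrative assumptions for this sketch, not the actual algorithm from the talk.

```python
import math
from dataclasses import dataclass

@dataclass
class Photo:
    ts: float    # capture time, in hours
    lat: float
    lon: float
    tags: set    # text tags attached to the photo

# Hypothetical feature weights; the talk explores learning such weights.
W_TIME, W_GEO, W_TEXT = 0.4, 0.4, 0.2

def similarity(a: Photo, b: Photo) -> float:
    """Combine per-modality similarities into one weighted score in [0, 1]."""
    s_time = math.exp(-abs(a.ts - b.ts) / 6.0)                    # decays over ~6 hours
    s_geo = math.exp(-math.dist((a.lat, a.lon), (b.lat, b.lon)) * 10)
    union = a.tags | b.tags
    s_text = len(a.tags & b.tags) / len(union) if union else 0.0  # Jaccard on tags
    return W_TIME * s_time + W_GEO * s_geo + W_TEXT * s_text

def cluster(photos, threshold=0.5):
    """Greedy single-pass clustering: attach each photo to the first event
    whose representative (first photo) is similar enough, else start a new
    event -- a crude stand-in for a streaming clustering algorithm."""
    events = []
    for p in photos:
        for ev in events:
            if similarity(p, ev[0]) >= threshold:
                ev.append(p)
                break
        else:
            events.append([p])
    return events
```

For example, two photos taken half an hour apart at the same venue with a shared tag end up in one event, while a photo from a different continent days later starts a new one.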
The document outlines 15 steps to become a Martian, including deciding to become an astronaut, meeting education and experience requirements, gaining relevant skills like fixing things and working as a team, and completing analog missions and survival training. It emphasizes the challenges of being selected for a Mars mission, as astronauts have a <0.1% chance and it may take 10+ years, and suggests embracing the long process through fitness, education, experience and a sense of humor.
SEWM'14 keynote: Mining Events from Multimedia Streams - Jonathon Hare
Keynote at the ICMR 2014 Workshop on Social Events in Web Multimedia (SEWM). Glasgow, UK. 1st April 2014.
The aggregation of items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information on the social web. This task is challenging due to the scale of the streams and the inherently multimodal nature of the information being contextualised.
In this talk we’ll describe some of our recent work on trend and event detection in multimedia data streams. We focus on scalable streaming algorithms that can be applied to multimedia data streams from the web and the social web. The talk will cover two particular aspects of our work: mining Twitter for trending images by detecting near duplicates; and detecting social events in multimedia data with streaming clustering algorithms. We will describe in detail our techniques, and explore open questions and areas of potential future work, in both these tasks.
Business Development for Startup Founders (DevCon Cebu 2018 keynote) - Alistair Israel
My brief keynote at the first ever DevCon Summit Cebu in 2018 where I share a few experiences and insights having started, worked at, and even exited startups my entire career, coming from a developer's perspective, for developers.
This document discusses emerging technologies and their potential impacts. It covers topics like artificial intelligence, quantum computing, robotics, cyborgs, smart materials, fusion power, artificial life, malware, biobots, network bots, and more. The document notes that many of these technologies are still in early experimental stages and face challenges before being ready for widespread use. It also discusses debates around AI safety and the relationship between humans and increasingly intelligent machines.
Seventy years on from AI appearing on the public scene and all the optimistic projections have been largely overtaken with systems outgunning humans at all board, card and computer games including Chess, Poker and GO. Of course; general knowledge, medical diagnosis, genetics and proteomics, image and pattern recognition are now all firmly in the grasp of AI.
Interestingly, AI is treading a similar path to computing in that it began with single-purpose/task machines that could only deal with a company's payroll calculations or banking transactions and nothing more! General-purpose computing emerged over further decades to give us the PCs and devices we now enjoy. So, AI currently runs as task-specific applications on these general-purpose platforms, and no doubt general-purpose AI will also become tractable in a few decades too!
Recent progress has prompted a good deal of debate and discussion, along with hundreds of published papers and definitions that attempt to characterise biological and artificial intelligence. But they all suffer the same futility and fail! Without reference to any formal characterisation, all discussion and debate remains relatively meaningless.
Somewhat ironically, it was the defence industry that triggered the analysis work here. Two of the key steps to success were: the abandonment of all performance comparisons between biological and machine entities; and the avoidance of using the human brain as some ‘golden’ intelligence reference.
This presentation is suitable for professionals and public alike, and comes fully illustrated with high-quality graphics, animations and movies. Inevitably, it contains (engineering) mathematics that non-practitioners will have to take on trust, whilst professionals may wish to challenge it on the basis that the focus is on getting a solution rather than on the purity of the process!
It has been estimated that the global earnings of Cyber Criminals will equal or exceed the GDP of the UK sometime in the 2022/23 window. If this were the capability of a country, they would be joining the G8! Clearly, we are losing the Cyber War hands down, and the time has long passed when we might ignore the threat scenarios surrounding us.
In this lecture we examine global networks from home and office through the ‘last mile,’ and on to national and international networks to identify the key vulnerabilities and points of potential ingress. We identify the cyber risks as escalating as we approach the periphery of all forms of network. For the most part, the core/carrier networks are virtually unassailable physically as they are dominated by terrestrial and undersea optical fibre cables.
Throughout the ‘carrier’ network levels, the difficulty of physical interception, together with the encryption, routing, and path diversity employed, renders them secure in the extreme. Attackers, therefore, tend to focus on the exploitation of people, devices, services, home and office appliances, and latterly, a poorly engineered IoT.
In reality, we are expanding the attack surface of the planet exponentially without due caution or care in the most exposed sectors and locations. And so, we explore potential tech and operational solutions for the future.
NOTE: This lecture is one of a series that has examined technology design and deployment, devices and the IoT, people fallibility, deviousness, internal and external threats.
In class, RED and BLUE Team exercises have also been conducted in support of the complete Cyber Security package to date.
Technology Trends, Consumer Experience @MICA 2016 - Ravi Pal
Technology trends and consumer experience: how do we build for new-age experiences? How do we understand experience and its architecture? And what are the possible candidates to attack in order to build an impact using technology?
This document provides instructions and guidance for Assessment 1 of a graphic design course. It outlines the requirements for the assessment, which includes 5 parts: A) researching the student's chosen industry; B) addressing legal issues relevant to graphic design practice; C) researching trends; D) applying trends to designs; and E) creating an industry trends portfolio. It provides tips on file naming, submission instructions, and describes the elements and critical aspects that will be assessed. These include developing an industry focus, understanding legal requirements, researching and evaluating trends, developing skills to meet trends, and responding to changing trends and technologies.
The document discusses tools, methods, and techniques for experience innovation and design. It covers topics like experience architecture, prototyping, storytelling, design thinking, and building minimum viable products. Tips provided emphasize challenging conventions, anticipating future needs, and designing with empathy and meaning to create natural experiences for humans.
Rp2-2015 - technology trends enriching consumer experience - Ravi Pal
The document discusses how different people experienced the 2015 Nepal earthquake through various media, content, stories, data, social apps, and technology. It also addresses the relationships between humans and machines and different ways of experiencing a story through application of technology in context. The document advocates for designing experiences for humans that are natural and empathetic.
Data Science Popup Austin: Conflict in Growing Data Science Organizations - Domino Data Lab
Watch talk ➟ http://bit.ly/1NKPpQh
Eduardo Arino De La Rubia, VP of Product and Data Scientist in Residence at Domino Data Lab, talks about how to manage conflict in growing data science teams.
Rp2-2015 - technology driven macro trends in marketing space - Ravi Pal
The document discusses trends in marketing technology, including the increasing capabilities of machines and how they are becoming more human-like. It also touches on the future of storytelling using virtual reality. Several topics are listed relating to the future of brands, including more innovation but less control, as well as segmentation, startups, and content. The future of marketing agencies is discussed as moving beyond traditional "mad men" styles to utilizing diverse talent and playing more of an educational role. The importance of human experiences, design, empathy, data, analytics, storytelling and solutions is emphasized.
The document discusses five common workplace legal pitfalls and provides strategies to avoid them. It addresses issues related to employee classification, health and safety litigation, equal employment opportunity laws, social media use, and limiting supervisor liability. For each pitfall, it provides tips such as carefully auditing employee classifications, establishing clear expectations and accountability, asking consistency questions during EEO investigations, defining appropriate social media use policies, and conducting harassment training for supervisors.
We are living through an extraordinary pandemic (CV-19) that has changed all the network norms, including the way we work and communicate. An invisible consequence has been the transformation of internet and telecoms traffic, prompted by people working from home, restrictions on all travel, and a paralysis of almost all social norms. Living and working in isolation for 3 - 5 months has become the new mode for many, and even the most technophobic have had to turn to video conferencing and on-line purchases to ‘survive’.
From a network point of view the transition has seen the concentrations of traffic in major cities and towns mutate to the dispersed and disparate working, social and entertainment activities that have found the last mile wanting. Insufficient bandwidth connectivity and resilience have quickly become a prime concern with the overloading of core networks a lesser concern.
Installing new optical links and making the core (undersea and overland long-lines) networks more robust is relatively easy as they are by far the most resilient and secure of our infrastructures. It is the local loop, our last mile, that poses the hard to fix problem. In this session we present tested model solutions based on direct ‘dark-fibre’ to home and office with no electronics, splitters or access points in the field. This is augmented by Mesh-Nets and 4/5G providing temporary bridges for random fibre breaks and cable damage.
What does mapping controversies mean and why controversies are charted and vi... - Khalid Md Saifuddin
The document discusses controversies and contested issues, and why mapping controversies is useful. It notes that controversies are complex and involve disagreements not just over opinions but also over core questions, relevant experts, and conditions for trusting expertise. While controversies may not be easily settled due to these disagreements, mapping them can provide a fruitful way to learn about science, technology and society. The document introduces controversy mapping as a teaching method and research approach, outlining the phases of a mapping project from understanding concepts to interpreting and presenting maps. It also discusses some key points about controversies from perspectives like actor-network theory.
We are engaged in a war the like of which we have never seen or experienced before. Our enemies are invisible and relentless; with globally dispersed forces working at all levels and in all sectors of our societies. They are better organised, resourced, motivated, and adaptive than any of our organisations or institutions, and they are winning. This war is also one of paradox!
“The cost to many nations is now on a par with their GDP”
“No previous war has seen so many suffer so much to (almost) never retaliate”
“We are up against attackers who operate as a virtual (ghost-like) guerrilla army”
“No state can defend its population and organisations, and they stand alone - isolated and exposed”
“A real army/defence force would rehearse and play all day and very occasionally engage in warfare. We, on the other hand, are at war every day but never play, war-game, or anticipate new forms of attack”
To turn this situation around we need to understand our enemies and adopt their tactics and tools as a part of our defence strategy. We also have to be united and organised so that no one, and no organisation, stands alone. We also have to engage in sharing attack data, experiences and solutions.
All this has to be supported by wargaming, and anticipatory solutions creation.
The good news is: we have better, and more, people, machines, networks, facilities, and expertise than our enemies. All it requires is the embracing of advanced R&D, leadership, sharing, and orchestration on a global scale.
Data Science Popup Austin: Privilege and Supervised Machine Learning - Domino Data Lab
Watch talk ⇒ http://bit.ly/1SGuwNs
I'll use the example of sentiment analysis to show that supervised machine learning has the potential to amplify the voices of the most privileged people in society. A sentiment analysis algorithm is considered ‘table stakes’ for any serious text analytics platform in social media, finance, or security. As an example of supervised machine learning, I'll show how these systems are trained. But I'll also show that they have the unavoidable property that they are better at spotting unsubtle expressions of extreme emotion. Such crude expressions are used by a particularly privileged group of authors: men. In this way, brands that depend on sentiment analysis to 'learn what people think' inevitably pay more attention to men. The problem doesn't stop with sentiment analysis: at every step of any model building process, we make choices that can introduce bias, enhance privilege, or break the law! I'll review these pitfalls, talk about how you can recognize them in your own work, and touch on some new academic work that aims to mitigate these harms.
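A toy illustration of the abstract's central point: a word-count (Naive Bayes-style) sentiment scorer, trained on a tiny hypothetical corpus, gives blunt, extreme wording a much stronger score than subtle wording. The corpus, the smoothing, and the scoring here are illustrative assumptions for the sketch, not the speaker's or any vendor's actual system.

```python
from collections import Counter
import math

# Tiny hand-labelled corpus (hypothetical) standing in for the large
# labelled datasets a production sentiment system is trained on.
TRAIN = [
    ("absolutely love this awesome phone", 1),
    ("this phone is terrible and awful", 0),
    ("i love it", 1),
    ("worst awful experience ever", 0),
    ("it is fine i suppose", 1),
    ("not great somewhat disappointing", 0),
]

def train(examples):
    """Count word occurrences per class (0 = negative, 1 = positive)."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def log_odds(counts, text):
    """Positive score => positive sentiment. Laplace smoothing keeps
    unseen words from zeroing the estimate."""
    vocab = set(counts[0]) | set(counts[1])
    n0, n1 = sum(counts[0].values()), sum(counts[1].values())
    score = 0.0
    for w in text.split():
        p1 = (counts[1][w] + 1) / (n1 + len(vocab))
        p0 = (counts[0][w] + 1) / (n0 + len(vocab))
        score += math.log(p1 / p0)
    return score

counts = train(TRAIN)
blunt = log_odds(counts, "awesome i love love it")   # unsubtle, extreme
subtle = log_odds(counts, "it is fine")              # muted, hedged
```

Both test phrases are positive, but the blunt one scores far higher, which is exactly the mechanism by which crude expressions of emotion get amplified relative to understated ones.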
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker, too. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. Michael Bolton shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join Michael for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
This lecture is the final session of an extensive wireless course delivered over several weeks at the University of Suffolk. So, by way of ‘rounding off’ the series, we chart the progression of wireless/radio communication from the first spark transmitters through Carrier-Wave Morse, AM, FM, DSSC, SSB to digital systems, along with the use of LW, MW, SW, VHF, UHF and Microwaves. Whilst we focus on Electro-Magnetic Waves from 30kHz through 300GHz, we also mention optical, ultrasonic, and chemical communication as additional modes.
Our examinations detail the distinct genetic trails of 1, 2, 3G and 4, 5G, the approximate development cycles/timeline, along with distinctive changes in design thinking. We then postulate that 6 and 7G are likely to form a new line of development, with 6G probably realised without any towers or any conventional cellular structure. In this context we also point out that there are no digital radios today, only traditional analogue designs with ‘strap-on modems’ at the transmitter and receiver. Perhaps more radically, we suggest that it is time to adopt fully digital designs that allow for the eradication of the established bands-and-channels mode of operation.
We also chart the energy hungry progression of systems from 1 through 5G where tower installations are now consuming in excess of 10kW due to the extensive signal processing employed. This immediately debunks any notion of another step in the direction of more bandwidth, lower latency, greater coverage with >20x more towers (than 4G) and >250Bn power hungry smart devices. In short: we propose that 5G is the last of the line and the realisation of 6G demands new thinking and new modes that lead us away from W and mW to µW and nW wireless designs.
Whilst most of the technology required for 6G is available up to 300GHz, there remains one big challenge in respect of the growing number of antennas per device and platform. Even for 3 - 5G + WiFi + BlueTooth, space is at a premium in mobile devices, and fractal antennas have not lived up to their promise to integrate all of these into one wideband structure. However, at 100GHz and above, antennas/dipoles become smaller than chip size, and tens can be included as phased arrays. But this all needs further work!
Throughout this lecture, we provide examples, demonstrations, and mind-experiments to support our assertions.
Every industrial revolution has seen the progression from people-dominated design, build and production to higher degrees of automation, going hand-in-hand with shortening timescales enabled by ever-more powerful technologies. However, at a fundamental level the process has remained the same, but it is now edging toward a continuum of evolution as opposed to a series of discrete jumps that often trigger company reorganizations. In concert, there is a realization abroad that it is no longer about the biggest, the strongest, the best, or the fittest; it is now all about the survival of the most adaptable.
By and large it is relatively easy to predict when and where tech change will occur and the likely outcomes, in terms of existing and future products and services, but how people, customers, companies and societies will react is an unsolved puzzle. On another plane, competition and threats may well occur outside the sector, from a direction managers are not looking, by entirely new mechanisms, and at a most critical time. These are all challenges indeed!
How to adapt to, and cope with these collective challenges is the focus of this presentation which is illustrated and supported by past and present industrial cases along with the experiences and methodologies of those who have driven/weathered this storm as well as those who failed. Many of the illustrations are automated and there are exemplar movies and segue inserts throughout.
Industries 1.0, 2.0, and most of 3.0 saw manufacturing and construction using natural materials readily extracted, refined, amalgamated, machined, and molded. In general, these exhibited fixed mechanical, electrical, and chemical properties. However, the latter stages of Industry 3.0 embraced synthetics exhibiting superior properties to afford new degrees of freedom in the design of structures and products.
Today Industry 4.0 sees further advances with metamaterials, dynamic coatings, controllable properties, and additive manufacturing. Embedded smarts have also made communication between components, products and structures possible under the guise of the IoT. Adaptable materials with a degree of self-repair are also opening the door to further freedoms and less material use. In combination, these represent a big step toward sustainable societies with highly efficient ReUse, RePurposing, and Recycling (3R).
At the leading edge, we are now realising active surfaces that can reflect, absorb, or amplify wireless signals, offer programmable colour, and integral energy storage. But amongst a growing list of possibilities, it is integral sensing & communication that may define this new era. In this presentation, we look at these advances in the context of smart design, cities & societies.
Research involves solving problems through established methods such as consulting others, using the internet, or conducting a literature review. It allows people to establish the truth using evidence and has improved life through scientific advances like antibiotics. Research provides the basis for practices in fields like medicine and is important for continuous development, assessing effectiveness, solving issues, informing decisions, evaluating teaching methods, and advancing qualifications.
The Relationship Between Body Image And The Media - Jessica Myers
The document highlights the following key points on this topic:
- The genetic revolution, specifically cloning, raised significant ethical concerns about interfering with nature and the sanctity of life. It challenged long-held views about what constitutes a "natural" birth.
- Cloning blurs the lines between animal and human life, raising questions about where to draw the line with genetic experimentation. Some fear a "slippery slope" towards human cloning if not regulated.
- There are also concerns that cloning could be used for eugenics or genetic enhancement of humans, allowing some to have "designer babies" with chosen traits while others do not. This raises issues of equality, ethics, and playing God.
Business Development for Startup Founders (DevCon Cebu 2018 keynote)Alistair Israel
My brief keynote at the first ever DevCon Summit Cebu in 2018 where I share a few experiences and insights having started, worked at, and even exited startups my entire career, coming from a developer's perspective, for developers.
This document discusses emerging technologies and their potential impacts. It covers topics like artificial intelligence, quantum computing, robotics, cyborgs, smart materials, fusion power, artificial life, malware, biobots, network bots, and more. The document notes that many of these technologies are still in early experimental stages and face challenges before being ready for widespread use. It also discusses debates around AI safety and the relationship between humans and increasingly intelligent machines.
Seventy years on from AI appearing on the public scene and all the optimistic projections have been largely overtaken with systems outgunning humans at all board, card and computer games including Chess, Poker and GO. Of course; general knowledge, medical diagnosis, genetics and proteomics, image and pattern recognition are now all firmly in the grasp of AI.
Interestingly, AI is treading a similar path to computing in that it began with single purpose/task machines that could only deal with a company payroll calculations or banking transactions and nothing more! General purpose computing emerged over further decades to give us the PCs and devices we now enjoy. So, AI currently runs as task specific applications on these general purpose platforms, and no doubt, general purpose AI will also become tractable in a few decades too!
Recent progress has promoted a deal of debate and discussion along with hundreds of published papers and definitions that attempt to characterise biological and artificial intelligence. But they all suffer the same futility and fail! Without reference to any formal characterisation, all discussion and debate remains relatively meaningless.
Somewhat ironically, it was the defence industry that triggered the analysis presented here. Two of the key steps to success were the abandonment of all performance comparisons between biological and machine entities, and the avoidance of the human brain as some ‘golden’ intelligence reference.
This presentation is suitable for professionals and the public alike, and comes fully illustrated with high-quality graphics, animations and movies. Inevitably, it contains (engineering) mathematics that non-practitioners will have to take on trust, whilst professionals may wish to challenge it on the basis that the focus is on getting a solution rather than on the purity of the process!
It has been estimated that the global earnings of cyber criminals will equal or exceed the GDP of the UK sometime in the 2022/23 window. If this were the output of a country, it would be joining the G8! Clearly, we are losing the cyber war hands down, and the time has long passed when we could afford to ignore the threat scenarios surrounding us.
In this lecture we examine global networks from home and office through the ‘last mile,’ and on to national and international networks to identify the key vulnerabilities and points of potential ingress. We identify the cyber risks as escalating as we approach the periphery of all forms of network. For the most part, the core/carrier networks are virtually unassailable physically as they are dominated by terrestrial and undersea optical fibre cables.
Throughout the ‘carrier’ network levels, the difficulty of physical interception, together with the encryption, routing, and path diversity employed, renders them secure in the extreme. Attackers therefore tend to focus on exploiting people, devices, services, home and office appliances, and latterly a poorly engineered IoT.
In reality, we are expanding the attack surface of the planet exponentially without due caution or care in the most exposed sectors and locations. And so, we explore potential tech and operational solutions for the future.
NOTE: This lecture is one of a series that has examined technology design and deployment, devices and the IoT, people fallibility, deviousness, internal and external threats.
In class; RED and BLUE Team Exercises have also been conducted in support of the complete Cyber Security Package to date.
Technology Trends, Consumer Experience @MICA 2016Ravi Pal
Technology trends and consumer experience, how to build for new age experience? how do we understand experience and its architecture? what are the possible candidates to attack to build an impact using technology.
This document provides instructions and guidance for Assessment 1 of a graphic design course. It outlines the requirements for the assessment, which includes 5 parts: A) researching the student's chosen industry; B) addressing legal issues relevant to graphic design practice; C) researching trends; D) applying trends to designs; and E) creating an industry trends portfolio. It provides tips on file naming, submission instructions, and describes the elements and critical aspects that will be assessed. These include developing an industry focus, understanding legal requirements, researching and evaluating trends, developing skills to meet trends, and responding to changing trends and technologies.
The document discusses tools, methods, and techniques for experience innovation and design. It covers topics like experience architecture, prototyping, storytelling, design thinking, and building minimum viable products. Tips provided emphasize challenging conventions, anticipating future needs, and designing with empathy and meaning to create natural experiences for humans.
Rp2-2015-technology trends enriching consumer experienceRavi Pal
The document discusses how different people experienced the 2015 Nepal earthquake through various media, content, stories, data, social apps, and technology. It also addresses the relationships between humans and machines and different ways of experiencing a story through application of technology in context. The document advocates for designing experiences for humans that are natural and empathetic.
Data Science Popup Austin: Conflict in Growing Data Science Organizations Domino Data Lab
Watch talk ➟ http://bit.ly/1NKPpQh
Eduardo Arino De La Rubia, VP of Product and Data Scientist in residence at Domino Data Lab talks about how to manage conflict in growing data science teams.
Rp2-2015 - technology driven macro trends in marketing space Ravi Pal
The document discusses trends in marketing technology, including the increasing capabilities of machines and how they are becoming more human-like. It also touches on the future of storytelling using virtual reality. Several topics are listed relating to the future of brands, including more innovation but less control, as well as segmentation, startups, and content. The future of marketing agencies is discussed as moving beyond traditional "mad men" styles to utilizing diverse talent and playing more of an educational role. The importance of human experiences, design, empathy, data, analytics, storytelling and solutions is emphasized.
The document discusses five common workplace legal pitfalls and provides strategies to avoid them. It addresses issues related to employee classification, health and safety litigation, equal employment opportunity laws, social media use, and limiting supervisor liability. For each pitfall, it provides tips such as carefully auditing employee classifications, establishing clear expectations and accountability, asking consistency questions during EEO investigations, defining appropriate social media use policies, and conducting harassment training for supervisors.
We are living through an extraordinary pandemic (CV-19) that has changed all the network norms, including the way we work and communicate. An invisible consequence has been the transformation of internet and telecoms traffic prompted by people working from home, restrictions on all travel, and a paralysis of almost all social norms. Living and working in isolation for 3 - 5 months has become the new mode for many, and even the most technophobic have had to turn to video conferencing and on-line purchases to ‘survive’.
From a network point of view, the transition has seen concentrations of traffic in major cities and towns mutate into dispersed and disparate working, social and entertainment activities that have found the last mile wanting. Insufficient bandwidth, connectivity and resilience have quickly become a prime concern, with the overloading of core networks a lesser one.
Installing new optical links and making the core (undersea and overland long-lines) networks more robust is relatively easy as they are by far the most resilient and secure of our infrastructures. It is the local loop, our last mile, that poses the hard to fix problem. In this session we present tested model solutions based on direct ‘dark-fibre’ to home and office with no electronics, splitters or access points in the field. This is augmented by Mesh-Nets and 4/5G providing temporary bridges for random fibre breaks and cable damage.
What does mapping controversies mean and why controversies are charted and vi...Khalid Md Saifuddin
The document discusses controversies and contested issues, and why mapping controversies is useful. It notes that controversies are complex and involve disagreements not just over opinions but also over core questions, relevant experts, and conditions for trusting expertise. While controversies may not be easily settled due to these disagreements, mapping them can provide a fruitful way to learn about science, technology and society. The document introduces controversy mapping as a teaching method and research approach, outlining the phases of a mapping project from understanding concepts to interpreting and presenting maps. It also discusses some key points about controversies from perspectives like actor-network theory.
We are engaged in a war the like of which we have never seen or experienced before. Our enemies are invisible and relentless; with globally dispersed forces working at all levels and in all sectors of our societies. They are better organised, resourced, motivated, and adaptive than any of our organisations or institutions, and they are winning. This war is also one of paradox!
“The cost to many nations is now on a par with their GDP”
“No previous war has seen so many suffer so much to (almost) never retaliate”
“We are up against attackers who operate as a virtual (ghost-like) guerrilla army”
“No state can defend its population and organisations, and they stand alone - isolated and exposed”
“A real army/defence force would rehearse and play all day and very occasionally engage in warfare. We, on the other hand, are at war every day but never play, war-game, or anticipate new forms of attack”
To turn this situation around we need to understand our enemies and adopt their tactics and tools as part of our defence strategy. We also have to be united and organised so that no one, and no organisation, stands alone. We also have to engage in sharing attack data, experiences and solutions.
All this has to be supported by wargaming, and anticipatory solutions creation.
The good news is that we have better, and more, people, machines, networks, facilities, and expertise than our enemies. All it requires is the embracing of advanced R&D, leadership, sharing, and orchestration on a global scale.
Data Science Popup Austin: Privilege and Supervised Machine LearningDomino Data Lab
Watch talk ⇒ http://bit.ly/1SGuwNs
I'll use the example of sentiment analysis to show that supervised machine learning has the potential to amplify the voices of the most privileged people in society. A sentiment analysis algorithm is considered ‘table stakes’ for any serious text analytics platform in social media, finance, or security. As an example of supervised machine learning, I'll show how these systems are trained. But I'll also show that they have the unavoidable property that they are better at spotting unsubtle expressions of extreme emotion. Such crude expressions are used by a particularly privileged group of authors: men. In this way, brands that depend on sentiment analysis to 'learn what people think' inevitably pay more attention to men. The problem doesn't stop with sentiment analysis: at every step of any model building process, we make choices that can introduce bias, enhance privilege, or break the law! I'll review these pitfalls, talk about how you can recognize them in your own work, and touch on some new academic work that aims to mitigate these harms.
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker, too. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. Michael Bolton shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join Michael for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
This lecture is the final session of an extensive wireless course delivered over several weeks at the University of Suffolk. By way of rounding off the series, we chart the progression of wireless/radio communication from the first spark transmitters through carrier-wave Morse, AM, FM, DSSC and SSB to digital systems, along with the use of LW, MW, SW, VHF, UHF and microwaves. Whilst we focus on electromagnetic waves from 30kHz through 300GHz, we also mention optical, ultrasonic, and chemical communication as additional modes.
Our examinations detail the distinct genetic trails of 1, 2, 3G and 4, 5G, the approximate development cycles/timelines, and the distinctive changes in design thinking. We then postulate that 6 and 7G are likely to form a new line of development, with 6G probably realised without any towers or conventional cellular structure. In this context we also point out that there are no digital radios today, only traditional analogue designs with ‘strap-on modems’ at the transmitter and receiver. Perhaps more radically, we suggest that it is time to adopt fully digital designs that allow for the eradication of the established bands-and-channels mode of operation.
We also chart the energy hungry progression of systems from 1 through 5G where tower installations are now consuming in excess of 10kW due to the extensive signal processing employed. This immediately debunks any notion of another step in the direction of more bandwidth, lower latency, greater coverage with >20x more towers (than 4G) and >250Bn power hungry smart devices. In short: we propose that 5G is the last of the line and the realisation of 6G demands new thinking and new modes that lead us away from W and mW to µW and nW wireless designs.
Whilst most of the technology required for 6G is available up to 300GHz, one big challenge remains in respect of the growing number of antennas per device and platform. Even for 3 - 5G + WiFi + Bluetooth, space is at a premium in mobile devices, and fractal antennas have not lived up to their promise to integrate all of these into one wideband structure. However, at 100GHz and above, antennas/dipoles become smaller than chip size, and tens can be included as phased arrays. But this all needs further work!
Throughout this lecture, we provide examples, demonstrations, and mind-experiments to support our assertions.
Every industrial revolution has seen the progression from people-dominated design, build and production to higher degrees of automation, hand in hand with shortening timescales enabled by ever-more-powerful technologies. At a fundamental level the process has remained the same, but it is now edging toward a continuum of evolution as opposed to a series of discrete jumps that often trigger company reorganizations. In concert, there is a realization abroad that it is no longer about the biggest, the strongest, the best, or the fittest; it is now all about the survival of the most adaptable.
By and large it is relatively easy to predict when and where tech change will occur and the likely outcomes, in terms of existing and future products and services, but how people, customers, companies and societies will react is an unsolved puzzle. On another plane, competition and threats may well occur outside the sector, from a direction managers are not looking, by entirely new mechanisms, and at a most critical time. These are all challenges indeed!
How to adapt to, and cope with these collective challenges is the focus of this presentation which is illustrated and supported by past and present industrial cases along with the experiences and methodologies of those who have driven/weathered this storm as well as those who failed. Many of the illustrations are automated and there are exemplar movies and segue inserts throughout.
Industries 1.0, 2.0 (and most of) 3.0, saw manufacturing and construction using natural materials readily extracted, refined, amalgamated, machined, and molded. In general, these exhibited fixed mechanical, electrical, and chemical properties. However, the latter stages of Industry 3.0 embraced synthetics exhibiting superior properties to afford new degrees of freedom in the design of structures and products.
Today Industry 4.0 sees further advances with metamaterials, dynamic coatings, controllable properties, and additive manufacturing. Embedded smarts have also made communication between components, products and structures possible under the guise of the IoT. Adaptable materials with a degree of self-repair are also opening the door to further freedoms and less material use. In combination, these represent a big step toward sustainable societies with highly efficient ReUse, RePurposing, and Recycling (3R).
At the leading edge, we are now realising active surfaces that can reflect, absorb, or amplify wireless signals, offer programmable colour, and integral energy storage. But amongst a growing list of possibilities, it is integral sensing & communication that may define this new era. In this presentation, we look at these advances in the context of smart design, cities & societies.
Research involves solving problems through established methods such as consulting others, using the internet, or conducting a literature review. It allows people to establish the truth using evidence and has improved life through scientific advances like antibiotics. Research provides the basis for practices in fields like medicine and is important for continuous development, assessing effectiveness, solving issues, informing decisions, evaluating teaching methods, and advancing qualifications.
The Relationship Between Body Image And The MediaJessica Myers
Here are the key points I would highlight in an essay on this topic:
- The genetic revolution, specifically cloning, raised significant ethical concerns about interfering with nature and the sanctity of life. It challenged long-held views about what constitutes a "natural" birth.
- Cloning blurs the lines between animal and human life, raising questions about where to draw the line with genetic experimentation. Some fear a "slippery slope" towards human cloning if not regulated.
- There are also concerns that cloning could be used for eugenics or genetic enhancement of humans, allowing some to have "designer babies" with chosen traits while others do not. This raises issues of equality, ethics, and playing God.
Revenue Optimization: The Science of Sales and Customer Success - Julie Weill...Traction Conf
There is a science behind creating key moments that matter in your pipeline to double your revenue. So many companies today focus on generating new customers and miss the key to the exponential revenue potential that comes from your existing customer base. After this session you will walk away with the Customer Success playbook to reduce churn, maximize upsells and cross-sells, and drive customer referrals.
This document provides an overview of observational studies in evidence-based medicine, including cohort and case control studies. It defines cohort and case control studies, describes their key requirements and limitations. It also defines and describes odds ratio, relative risk, and absolute risk, and how to interpret these measures of effect. Examples are provided to illustrate mobile phone use and risk of brain cancer from the INTERPHONE study.
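The measures of effect the overview describes can be sketched from a 2x2 exposure/outcome table. This is a minimal illustration with invented counts, not data from the INTERPHONE study:

```python
# Hypothetical 2x2 table: rows = exposure status, columns = outcome status.
a, b = 30, 70   # exposed:   cases, non-cases
c, d = 10, 90   # unexposed: cases, non-cases

# Relative risk (cohort studies): ratio of the incidence in each group.
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
relative_risk = risk_exposed / risk_unexposed   # ≈ 3.0

# Odds ratio (case-control studies): cross-product ratio of the table.
odds_ratio = (a * d) / (b * c)                  # ≈ 3.86

print(relative_risk, odds_ratio)
```

Note that the odds ratio overstates the relative risk here; the two only approximate each other when the outcome is rare.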
This document discusses people analytics and its benefits. People analytics uses data and analytics tools to inform decisions around managing employees, such as hiring, performance, compensation, and retention. It aims to minimize human biases and replace intuitive decision making with data-driven insights. Some benefits include increased employee engagement, identifying top performers and how to compose productive teams. The future of people analytics relies on developing a responsible culture of data use to avoid perceived surveillance and maintain employee trust.
Crowdsourcing platforms are revolutionizing research by providing a way to collect clinical and behavioral data with unprecedented speed and efficiency. This seminar explores another digital platform called TurkPrime that is designed to support research participant recruitment. TurkPrime is a relatively new panel service that allows researchers to target specific demographic groups. If you watched our previous webinar on Amazon’s Mechanical Turk, also known as MTurk, you may find it interesting that TurkPrime offers a proportional matching sampling approach rather than MTurk’s opt-in, convenience sampling approach. Tasks that can be implemented with TurkPrime include: excluding participants on the basis of previous participation, longitudinal studies, making changes to a study while it is running, automating the approval process, increasing the speed of data collection, sending bulk e-mails and bonuses, enhancing communication with participants, monitoring dropout and engagement rates, providing enhanced sampling options, and many others.
IRJET- Technology Related Anxiety- The Deepest Contributor to StressIRJET Journal
- The document discusses the relationship between technology and stress/anxiety. It conducted a questionnaire-based study among 80 subjects from various age groups and cities in India.
- The study found that excessive technology use can lead to issues like distraction, isolation, lack of sleep, and difficulty concentrating, which in turn can cause stress and anxiety. Heavy phone, email, and social media use was linked to increased feelings of irritation, tension, and lowered self-esteem.
- The data analysis found a high reliability value, indicating that dependence on and exposure to technology is directly proportional to reported technology-related anxiety and stress. Excessive technology use may disrupt adaptive behaviors and coping strategies.
The document outlines the steps of the PPDAC statistical inquiry cycle which includes problem, plan, data, analysis, and conclusion. It provides descriptions of the key elements and activities that occur at each step such as formulating a question, developing a plan to collect and measure data, analyzing and interpreting results, and drawing conclusions. Examples are also provided of elements that would be classified under each step of the cycle to illustrate how to apply the framework to an investigation.
This document summarizes the results of a simulation study that compared the ability of lifeguards and "patrol support" personnel to detect simulated drowning victims. The study found that while both groups' detection times improved after additional training, patrol support personnel were initially slower to detect drowning behaviors than traditionally trained lifeguards. Factors like age, sleep deprivation, and humidity/temperature were also found to impact detection times. Based on these results, the researchers question whether non-swimmers could effectively be used for surveillance duties to support lifeguards, and they plan to conduct further workplace studies on this topic.
PRACTICAL RESEARCH 1 LESSONs 11.-10pptxsherylduenas
The document discusses the importance and characteristics of research. It states that research directs inquiry, empowers people with knowledge, and facilitates learning. It then describes 7 key characteristics of research: empirical, logical, cyclical, analytical, critical, methodical, and replicable. Research utilizes proven analytical procedures and careful judgment in a systematic, methodical way without bias. The design and procedures can be repeated to arrive at valid results.
Sessions 3 and 4.
You can find the videos at: https://www.youtube.com/playlist?list=PLbDKbfkhV-YNqcR-tsN1o4SNNyLdirIwy
The Money Adventure channel: https://www.youtube.com/channel/UC4gsdg96ZEoaFbIiigDOf6w?view_as=subscriber
The document describes different study designs for observational studies, including matching designs. It provides two examples of matching designs used to study the effects of hurricanes on online friendships and the effects of exercise on mental health using Twitter data. The hurricane study matched universities affected by a hurricane with unaffected universities on variables like size and ranking. The exercise study matched Twitter users who tweeted about exercising with similar users who did not exercise. The document also discusses using propensity score matching and difference-in-differences to study the effect of having an answer accepted on question answering sites like Stack Overflow.
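The matching idea above can be sketched as greedy 1:1 nearest-neighbour matching on a propensity score. The unit identifiers and scores below are invented for illustration:

```python
# Treated and control units with hypothetical propensity scores.
treated = [('u1', 0.62), ('u2', 0.35)]
controls = [('c1', 0.30), ('c2', 0.60), ('c3', 0.80)]

matches = {}
available = dict(controls)  # controls still eligible for matching
for uid, score in treated:
    # Pick the closest remaining control (matching without replacement).
    best = min(available, key=lambda cid: abs(available[cid] - score))
    matches[uid] = best
    del available[best]

print(matches)  # {'u1': 'c2', 'u2': 'c1'}
```

Real studies estimate the score with a model (e.g. logistic regression on covariates) and often add a caliper so poorly matched pairs are discarded; this sketch only shows the pairing step.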
The document discusses knowledge mining based on applications of methods and technologies for risk prediction. It presents a methodology for analyzing and optimizing quality and risks in a system's life cycle using probabilistic modeling and risk prediction. This includes defining quality and risk metrics, establishing acceptable quality and risk levels, analyzing system operation scenarios considering threats, and developing mathematical models for risk analysis. The methodology allows answering questions about meeting standards, achievable effects, risk levels of scenarios, and effective risk mitigation measures. Examples show how it can be applied to systems in various industries to predict quality and risks from data mining and monitoring.
Renowned gamification author, speaker and professor, Karl Kapp joins Axonify CEO Carol Leaman to explore current myths surrounding gamification in corporate learning.
To view the full recording, visit http://www.axonify.com/gamificationwebinar
This document discusses the need to study data science as a discipline through examining the processes, techniques, and outputs. It presents data science as consisting of iterative steps like forming hypotheses, collecting and analyzing data, and extracting results. Ontologies and platforms are proposed as tools to systematically describe datasets, licenses, models, and tasks. Case studies examine modeling data flows and understanding patterns in large data science systems. The document argues for an interdisciplinary approach and using techniques like science fiction to ensure data science is developed and applied responsibly through considering social and ethical implications.
Experimental design aims to describe or explain how variables change under hypothesized conditions. However, it has some weaknesses and issues. It can only examine the direct impact of one or two factors rather than complex relationships. Randomization removes the effects of other variables but also removes important contextual information. There are also threats to internal validity like history effects, maturation, testing, and selection bias. External validity can be undermined if samples are not representative or conditions are artificial. Practical challenges include how much to disclose to participants, sample sizes, recruitment methods, and ensuring interventions are applied consistently. Ethical issues involve voluntary and informed consent, avoiding harm, and maintaining anonymity and confidentiality.
This document is a thesis submitted by Elizabeth Jenkins to Howard University examining the effect of confidence in past performance on future performance. It provides background on underrepresentation of minorities in STEM fields and research showing stereotype threat and validation can negatively impact performance. The study hypothesized that high confidence in strong past performance would predict better future performance, while high confidence in poor past performance would predict worse future performance. 147 Black undergraduate students completed 2 math tests, evaluating their first performance and confidence. Results found high confidence in strong performance predicted better second test scores, while high confidence in poor performance predicted worse scores, supporting the hypothesis.
Research methods can generally be divided into two main categories: Quantitative and Qualitative. This webinar will provide an overview of quantitative methods with a brief distinction between quantitative and qualitative methods. We will focus on when and how to use quantitative research and discuss type of variables and statistical analysis.
Presentation will be led by Dr. Carlos Cardillo.
About CORE:
The Culture of Research and Education (C.O.R.E.) webinar series is spearheaded by Dr. Bernice B. Rumala, CORE Chair & Program Director of the Ph.D. in Health Sciences program in collaboration with leaders and faculty across all academic programs.
This innovative and wide-ranging series is designed to provide continuing education, skills-building techniques, and tools for academic and professional development. These sessions will provide a unique chance to build your professional development toolkit through presentations, discussions, and workshops with Trident’s world-class faculty.
For further information about CORE or to present, you may contact Dr. Bernice B. Rumala at Bernice.rumala@trident.edu
This document discusses various topics related to research methods, including:
- The difference between applied and basic research, with basic research focusing on expanding knowledge without immediate commercial goals and applied research seeking to solve specific problems.
- Key hallmarks of scientific research, such as having a clear purpose, rigorous methodology, testable hypotheses, replicable results, objective conclusions, and generalizability.
- The scientific method process of observation, hypothesis, prediction, experiment, and conclusion to systematically investigate topics and reach new understandings. Tomato plant growth in relation to sunlight is used as an example to explain the different steps.
What's the Science in Data Science? - Skipper SeaboldPyData
The gold standard for validating any scientific assumption is to run an experiment. Data science isn’t any different. Unfortunately, it’s not always possible to design the perfect experiment. In this talk, we’ll take a realistic look at measurement using tools from the social sciences to conduct quasi-experiments with observational data.
J. Paul Reed gave a presentation on approaches for implementing continuous delivery. He began with surveys to assess audience knowledge of continuous integration and delivery. He discussed the importance of the right people, tools, and processes for continuous integration. Reed then covered challenges that can arise when integrating streams of change and rethinking quality approaches. Using a hiking analogy throughout, he addressed common myths and misconceptions about continuous delivery. Reed emphasized that continuous delivery requires organizational commitment, focused investment, and increased transparency across teams.
My talk with Jim Kimball on the tyranny of the SLA; in it, we:
- Deconstruct the purpose of the service level agreement
- Discuss pitfalls of aspects of common SLA clauses, including how current SLAs inhibit the development of resilient systems and the cultivation of a DevOps culture
- Explore other potential SLA models that could foster healthier organizational behaviors and dynamics, and ultimately result in better technical outcomes and therefore business outcomes.
The Blameless Cloud: Bringing Actionable Retrospectives to SalesforceJ. Paul Reed
DevOps Enterprise Summit 2015 presentation with Kevina Finn-Braun, Director of SRE Management at Salesforce: this is the story of my months-long journey with Kevina and her team to identify the specifics of what made reliability retrospectives difficult to have, why actionable takeaways were often lacking, and how the feedback loops within the company’s operations organization weren’t serving Salesforce’s needs.
We then ran a series of experiments together, putting the SRE team on a road to improving their ability to respond, react, remediate, and reincorporate learnings from failure into the organization.
Tools, Culture, and Aesthetics: The Art of DevOpsJ. Paul Reed
My DevOps Days Tel Aviv keynote: In this talk, we will examine why these now school-aged ideals remain so difficult to implement, explore why DevOps is often described as "the movement that refuses to identify itself," and what your team can do to confront the dichotomies they are likely to face as they transform how they, their colleagues, and their company go about their daily work.
Fixing Your Org By Continually Breaking It (J. Paul Reed)
The document summarizes key points from a presentation by J. Paul Reed on continually breaking and fixing an organization to promote resilience. It discusses establishing a culture of continuous experimentation through small, targeted experiments aimed at cultural alignment rather than process replication. Experiments should specify target conditions rather than solutions and focus on revealing and removing obstacles over time through reflection.
This document summarizes J. Paul Reed's presentation at DevOps Days Rockies on April 23, 2015. The presentation discusses how both development and operations practices have evolved over time to incorporate better collaboration and automation. It emphasizes that the tools used are less important than adopting practices that move development and operations progressively to the right on a timeline, such as implementing infrastructure as code and treating infrastructure like cattle. The document also stresses that cultural alignment between development and operations is critical for DevOps success.
Has “DevOps” jumped the shark?
Some say yes; others say 2014 will be the year DevOps dons its Fonz-esque leather jacket. Whichever you believe, the marketing feeding frenzy has begun and the dilution of the “DevOps” concept to include everything (and simultaneously mean nothing) is palpable.
This talk deconstructs the meta-elements of DevOps that made it resonate so strongly with so many and allowed those familiar DevOps poster children—Netflix, Etsy, and others—to deploy the methodology with such success in their businesses. We’ll go beyond DevOps’ classical CAMS (culture, automation, metrics, and sharing) definition to discover exactly what made DevOps relevant, and what about it is so timeless and foundational that it will make whatever-follows-DevOps relevant, too.
Is Your Team Instrument Rated? (Or Deploying 125,000 Times a Day) (J. Paul Reed)
J. Paul Reed's DevOpsDays Silicon Valley 2013 presentation "Is Your Team Instrument Rated?"
The presentation discusses the operational model similarities between the National Airspace System and a well-run software development shop that employs DevOps methodologies.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many of the features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Building RAG with self-deployed Milvus vector database and Snowpark Container... (Zilliz)
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more than that in common.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. Previously she worked on LibreOffice migrations and training courses for various public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you... (Zilliz)
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
8. HOW DO YOU KNOW WHAT TO DO WHEN AN INCIDENT IS OCCURRING?
@jpaulreed #monitorama
9. J. PAUL REED
• @JPAULREED ON
• @SHIPSHOWPODCAST ALUM
• 15+ YEARS IN BUILD/RELEASE ENGINEERING
• NOW, A DEVOPS CONSULTANT™
• MASTERS OF SCIENCE CANDIDATE IN HUMAN FACTORS AND SYSTEMS SAFETY
10. HOW DO YOU KNOW WHAT TO DO WHEN AN INCIDENT IS OCCURRING?
11. Two Brain Systems
“System One”: “Automatic” / quick. Little to no effort. No sense of voluntary control.
12. Two Brain Systems
“System One”: “Automatic” / quick. Little to no effort. No sense of voluntary control.
“System Two”: “Effortful.” Complex computations. “Associated with the subjective experience of agency, choice, and concentration.”
14. Two Problem Types
“System One”: Orient to the source of a sudden sound. Complete: “bread and…”. 2 + 2 = ? Find a strong move in chess (but only if you’re a chess master!).
“System Two”: Focus on a particular voice in a crowded room. Count the occurrence of the letter ‘a’ on this slide. Fill out a tax form. Check the validity of a complex logical argument.
16. TRADE-OFFS UNDER PRESSURE: HEURISTICS AND OBSERVATIONS OF TEAMS RESOLVING INTERNET SERVICE OUTAGES
John Allspaw, Lund University, Sweden
Date of submission: 2015-09-07
17. “THE INCIDENT”
On December 4th, 2014, during the busy holiday shopping season, it was reported at 1:06 PM EST that the personalized homepage for logged-in users was experiencing loading issues.
19. Figure 19 - Infrastructure Engineer 1 timeline
[Timeline chart, 13:06:44 to 14:30:00, plotting the engineer's Diagnostic Activity, Taking Action/Response, Dashboard Access, Staff Directory Access, Princess Requests, and Production Site Requests. Key events: a HOLD is placed on the push queue; ProdEng2 turns off the homepage sidebar module; ProdEng1 re-enables the sidebar, with blog turned off; the HOLD is removed from the push queue.]
20. Software: A Team Sport
Figure 8 - Timeline view of utterances in IRC, by participant (combined IRC utterances)
25. Heuristic #3: Convergent Searching
Confirm / disqualify a specific and past diagnosis, or a general and recent diagnosis, that comes to mind by matching signals or symptoms that appear similar.
26. Heuristic #3: Convergent Searching
Confirm / disqualify a really painful incident-memory, or an incident still in your L1 cache, that comes to mind by matching signals or symptoms that appear similar.
27. “THE INCIDENT”
The page load time increase was caused by: CDN cache misses… due to an HTTP 400 status in an API… from a “closed store”… referenced by a blog post in the sidebar.
(Figure 5 - Signed-in homepage with sidebar components)
28. [Diagram of the diagnostic search, annotating each utterance by engineer (IE1 through IE5, PE1 through PE3) and legend: stated hypothesis vs. critical relayed observation.
Stated hypotheses: disable a CDN? Load balancer changes? Network changes? Wordpress issue? Frozen shop? Featured shop? Varnish queuing? Featured staff shop? Sidebar loading staff shop? Varnish not caching? Database schema change? Varnish queuing, not caching, 400 responses?
Critical relayed observations: errors from homepage sidebar; 400 response code; PublicShops_GetShopCards API method; featured shop loading OK; “Shop 1234567 does not exist.”
Actions: ProdEng2 turns off the homepage sidebar module; ProdEng1 re-enables the sidebar, with blog turned off.]
30. Bonus Heuristic: Testing the Fix
[Survey excerpt from the thesis: “5 = I ALWAYS wait for tests to finish, I don't care how much time pressure there is.” The results of question one were: 29 Yes, 3 No (n=32). The results of question two appear in Figure 18 - Survey results: waiting for automated tests to finish.]
31. Bonus Heuristic: Testing the Fix
YOLO, Every Day, Twice on Sundays?
32. HOW DO YOU GET BETTER AT DETECTING AN INCIDENT IS OCCURRING?
35. HOW DO YOU GET BETTER AT KNOWING WHAT TO DO WHEN AN INCIDENT IS OCCURRING?
36. Elements of “Expertise”
Experts use their knowledge base to:
- Recognize typicality
- Make fine discriminations
- Use mental simulation
The knowledge base is also used to apply higher-level rules.
37. “Seeing the Invisible”
With experience, a person gains the ability to visualize how a situation developed and to imagine how it’s going to turn out. Experts can see what is not there.
Seeing the Invisible: Perceptual-Cognitive Aspects of Expertise, Klein & Hoffman
40. “Yeah, but Malcolm Gladwell…”
The Role of Deliberate Practice in the Acquisition of Expert Performance
K. Anders Ericsson, Ralf Th. Krampe, and Clemens Tesch-Römer
Psychological Review, 1993, Vol. 100, No. 3, 363-406
Abstract: The theoretical framework presented in this article explains expert performance as the end result of individuals' prolonged efforts to improve performance while negotiating motivational and external constraints. In most domains of expertise, individuals begin in their childhood a regimen of effortful activities (deliberate practice) designed to optimize improvement. Individual differences, even among elite performers, are closely related to assessed amounts of deliberate practice. Many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years. Analysis of expert performance provides unique evidence on the potential and limits of extreme environmental adaptation and learning.
42. Expert Performance
Ericsson, Krampe, & Tesch-Römer (1993), “The Role of Deliberate Practice in the Acquisition of Expert Performance,” Psychological Review, 100(3), 363-406.
43. Expertise in Other Crafts
- Immediately starting the APU
- Taking control of the airplane
- Not attempting to land at La Guardia Airport
45. Transforming Experience into Expertise
Personal Experiences: “the opportunity to be continually challenged”
Directed Experiences: receiving tutoring so as to be able to tutor
Manufactured Experiences: training / simulation
Vicarious Experiences: painful / memorable events we craft into stories we tell others
46. Transforming Experience into Expertise
Personal Experiences: “On-call”
Directed Experiences: Training / Code Review / Pair Programming / Wikis+Runbooks
Manufactured Experiences: Chaos Engineering / Game Days
Vicarious Experiences: “I remember this one incident… where it was DNS.”
56. Maslow’s SRE Hierarchy
Figure III-1. Service Reliability Hierarchy, with Monitoring as its base layer
Site Reliability Engineering: How Google Runs Production Systems
57. Just Two Questions
Did at least one person learn one thing that will affect how they work in the future?
Did at least half of the attendees say they would attend another debrief in the future?
Debriefing Facilitation Guide: Leading Groups at Etsy to Learn From Accidents. Authors: John Allspaw, Morgan Evans, Daniel Schauenberg
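The two-question test for a successful debrief is simple enough to state as code. This is an illustrative sketch only; the function and field names are invented here, not taken from the Etsy guide.

```python
def debrief_succeeded(responses):
    """Apply the two debrief questions to attendee survey responses.

    Each response is a dict with boolean fields 'learned_something'
    (one thing that will affect how they work in the future) and
    'would_attend_again' (would attend another debrief). Field names
    are hypothetical."""
    # Question 1: did at least one person learn one thing?
    anyone_learned = any(r["learned_something"] for r in responses)
    # Question 2: did at least half say they would attend another debrief?
    half_would_return = sum(r["would_attend_again"] for r in responses) >= len(responses) / 2
    return anyone_learned and half_would_return

responses = [
    {"learned_something": True, "would_attend_again": True},
    {"learned_something": False, "would_attend_again": False},
]
print(debrief_succeeded(responses))  # → True
```

Note what is deliberately absent from the two questions: no root cause found, no action items filed. The bar is learning, not output.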
58. HOW DO YOU GET BETTER AT KNOWING WHAT TO DO WHEN AN INCIDENT IS OCCURRING?
59. CREATE SPACE & EXPERIENCES TO FACILITATE THE CULTIVATION OF OURSELVES AND OUR TEAMS SO AS TO IMPROVE OUR HEURISTICS AT DETECTING WEAK SIGNALS AND AMBIGUITY IN THE COMPLEX SOCIO-TECHNICAL SYSTEMS WE OPERATE AND IN WHICH WE EXIST
65. Bibliography
Allspaw, J. (2015). Trade-offs under pressure: heuristics and observations of teams resolving Internet service outages (Unpublished master’s thesis). Lund University, Lund, Sweden.
Allspaw, J., Evans, M., & Schauenberg, D. (2016). Debriefing facilitation guide: leading groups at Etsy to learn from accidents. Retrieved January 23, 2017, from https://extfiles.etsy.com/DebriefingFacilitationGuide.pdf
Beyer, B., Jones, C., Petoff, J., & Murphy, N. R. (Eds.). (2016). Site reliability engineering: how Google runs production systems. Sebastopol, California: O’Reilly Media.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), pp. 363-406.
Ericsson, K. A. (2013). Why expert performance is special and cannot be extrapolated from studies of performance in the general population: a response to criticisms. Intelligence, 45, pp. 81-103.
66. Bibliography
Gladwell, M. (2008). Outliers: the story of success. New York, New York: Little, Brown and Company.
Kahneman, D. (2011). Thinking, fast and slow. New York, New York: Farrar, Straus and Giroux.
Klein, G. A., & Hoffman, R. R. (1992). Seeing the invisible: perceptual-cognitive aspects of expertise. In M. Rabinowitz (Ed.), Cognitive science foundations of instruction (pp. 203-226). Mahwah, New Jersey: Erlbaum.
Rasmussen, J. (1997). Risk management in a dynamic society: a modelling problem. Safety Science, 27(2-3), pp. 183-213.
Sullenberger, C. & Zaslow, J. (2009). Highest duty: my search for what really matters. New York, New York: Harper Collins.