HATech helps clients get to market faster with online service updates, helping them become more innovative and more competitive. Welcome to DevOps: a culture that transcends business boundaries.
Blockchain + Streaming Analytics with Ethereum and TIBCO StreamBase – Kai Wähner
This slide deck shows why middleware and streaming analytics are relevant for any blockchain project. It discusses how to leverage stream processing and how to integrate with blockchain events. The focus is on the integration of TIBCO StreamBase with the Ethereum blockchain, but the same can easily be done for any Hyperledger blockchain like IBM's Fabric, IROHA or Intel's Sawtooth Lake, or for others like R3 Corda or Ripple. For smart contract deployment, I use Browser Solidity and MetaMask, but the same can be achieved with TIBCO StreamBase (or BusinessWorks). The live demo can be watched on YouTube.
The outlook includes upcoming topics such as:
- Live Visualization for Real-Time Monitoring and Proactive Actions
- Cross-Integration with Ethereum and Hyperledger Blockchains
- Data Discovery for Historical Analysis to Find Insights and Patterns
- Machine Learning to Build Analytic Models
- Application Integration with other Applications (Legacy, Cloud Services, …)
- Native Hardware Integration with Internet of Things Devices
Some use cases / real world examples:
- Banking: Data Discovery for compliance issues, fraud or other anomalies
- Stock / Energy Trading: Subscribe to events (e.g. price went over a threshold) – event correlation and a proactive live UI
- Manufacturing / Internet of Things: Supply chain management with various partner companies (maybe even various blockchains)
- Many other use cases...
Thanks to my colleague Steven Warwick for implementing the StreamBase connectors and demo!
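The stock/energy trading use case above (subscribing to price-threshold events) can be sketched as a minimal stream processor. This is a hypothetical Python illustration, not TIBCO StreamBase code; the `Tick` type and the threshold values are invented:

```python
# Hypothetical sketch: correlate a stream of price ticks and emit an
# alert event when a price first moves above its threshold.
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float

def detect_threshold_crossings(ticks, thresholds):
    """Yield (symbol, price) when a symbol's price first rises above its
    configured threshold (rising-edge detection, not every tick)."""
    above = {}  # symbol -> was the previous tick already above threshold?
    for tick in ticks:
        limit = thresholds.get(tick.symbol)
        if limit is None:
            continue
        is_above = tick.price > limit
        if is_above and not above.get(tick.symbol, False):
            yield (tick.symbol, tick.price)
        above[tick.symbol] = is_above

stream = [Tick("ACME", 99.0), Tick("ACME", 101.5), Tick("ACME", 102.0),
          Tick("ACME", 98.0), Tick("ACME", 100.2)]
alerts = list(detect_threshold_crossings(stream, {"ACME": 100.0}))
```

A real StreamBase deployment would express this as a continuous query over the event stream; the rising-edge check keeps the operator from alerting on every tick while the price stays above the threshold.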
Machine Learning Applied to Real Time Scoring in Manufacturing and Energy Uti... – Kai Wähner
Kai Wähner (@KaiWaehner) is a Technology Evangelist and Community Director at TIBCO Software, a leading provider of integration and analytics middleware. Kai is experienced in a broad variety of topics such as Big Data, Advanced Analytics and Machine Learning, and he loves to write articles, blog about new technologies, and give talks. The talk covers three different projects in which Kai's team built analytic models with technologies such as R, Apache Spark, or H2O.ai and deployed them to real-time processing. The use cases include predictive maintenance in manufacturing, fraud detection in banking, and context-specific pricing in insurance. For one of the cases, Kai shows in detail how the model was built and deployed using supervised and unsupervised ML.
The talk was given together with my colleague Ankitaa Bhowmick.
DevSecCon London 2018: Is your supply chain your Achilles' heel? – DevSecCon
COLIN DOMONEY
The advent of DevOps and large scale automation of software construction and delivery has elevated the software supply chain – and its underpinning delivery pipeline – to mission critical status in any modern enterprise. The increased velocity of modern pipelines and the removal of manual checks and balances has meant that modern pipelines are potential single points of failure in the delivery of secure software.
Automotive and consumer electronics industries have long understood the need for both provenance (understanding the origin of materials) and veracity (ensuring the integrity of their manufacturing processes) in their supply chains; this presentation will address threats to software supply chains and practical approaches to reducing the fragility of your supply chain. Several examples of software supply chain failures will be presented and deconstructed to understand the typical failure modes.
At the most elementary level, many pipelines are poorly constructed, with low levels of repeatability and poor test coverage; in other organisations there is a lack of governance over the supply chain, allowing careless or wilfully negligent actors to subvert or bypass controls or testing within the pipeline. There is also no standard mechanism to ensure a 'chain of custody' within a pipeline, due to the lack of a common interchange format between tools or a standard manner of representing the steps within a pipeline build process.
This presentation will cover approaches (using 'people and process') to enforcing governance within a supply chain, describing best practices used in large-scale AppSec programmes. Several emerging technology initiatives will be presented: Google's Grafeas is a means to ensure vulnerability information is represented in a uniform manner across all steps of a pipeline process, while In-Toto is a project to formally enforce the integrity of a pipeline process. A reference secure pipeline will be presented demonstrating both tools working in concert, along with standard open source and commercial AppSec tools.
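The 'chain of custody' idea can be illustrated with a hash chain over pipeline steps. This is a conceptual sketch only, not the actual In-Toto or Grafeas API; the step names and artifacts are invented:

```python
# Conceptual sketch of a pipeline "chain of custody": each step's record
# hashes its own output together with the previous record, so tampering
# with any earlier step invalidates every later record.
import hashlib

def record_step(prev_record, step_name, artifact):
    """Return a hex digest chaining this step to the previous record."""
    h = hashlib.sha256()
    h.update(prev_record.encode())
    h.update(step_name.encode())
    h.update(artifact)
    return h.hexdigest()

def build_chain(steps):
    """steps: list of (name, artifact_bytes) tuples, in pipeline order."""
    records, prev = [], ""
    for name, artifact in steps:
        prev = record_step(prev, name, artifact)
        records.append(prev)
    return records

steps = [("compile", b"app.bin"), ("test", b"report.xml"), ("package", b"app.tar")]
chain = build_chain(steps)
# Replacing the compile artifact changes every record downstream.
tampered = build_chain([("compile", b"evil.bin")] + steps[1:])
```

Real frameworks such as In-Toto add signed link metadata and layout verification on top of this chaining idea.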
Finally, the pipeline itself may become the Achilles' heel of an organisation – many pipelines are not sufficiently hardened and are themselves open to attack through vulnerable components and their extensible nature, often combined with very wide-open permissions. Guidance will be given on hardening typical pipelines, and a fully secured ephemeral Jenkins pipeline will be demonstrated.
Benefits of this session: attendees will gain an increased awareness of the pivotal importance of the software supply chain and an understanding of some common failure modes and weaknesses. Most importantly, they will come away with practical guidance on enforcing higher levels of governance on their supply chain without reducing delivery velocity, as well as on how to harden the pipeline infrastructure itself.
apidays LIVE New York 2021 - Microservice Authorization with Open Policy Agen... – apidays
apidays LIVE New York 2021 - API-driven Regulations for Finance, Insurance, and Healthcare
July 28 & 29, 2021
Microservice Authorization with Open Policy Agent
Tim Hinrichs, Co-Founder and CTO at Styra
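The decision model behind OPA-style microservice authorization can be sketched in plain Python. In a real deployment the policy would be written in Rego and evaluated by an OPA sidecar or library; the roles and paths below are hypothetical:

```python
# Illustrative stand-in for an OPA allow/deny decision on a microservice
# request. The input document mirrors what a service would send to OPA;
# the policy rules here are invented for illustration.
def allow(input_doc):
    method = input_doc.get("method")
    path = input_doc.get("path", [])
    roles = set(input_doc.get("roles", []))
    # Anyone may read the public catalog.
    if method == "GET" and path[:1] == ["catalog"]:
        return True
    # Only the 'admin' role may modify orders.
    if method in ("POST", "PUT", "DELETE") and path[:1] == ["orders"]:
        return "admin" in roles
    # Deny by default.
    return False

allowed_read = allow({"method": "GET", "path": ["catalog"], "roles": []})
allowed_admin = allow({"method": "DELETE", "path": ["orders", "42"], "roles": ["admin"]})
denied_viewer = allow({"method": "DELETE", "path": ["orders", "42"], "roles": ["viewer"]})
```

The default-deny structure mirrors how Rego policies are typically written: the decision is false unless some rule explicitly grants access.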
DevSecCon London 2018: Whatever happened to attack aware applications? – DevSecCon
MATTHEW PENDLEBURY
Today’s security detection and response capabilities are usually focused on endpoints and network devices. Applications are often considered a distant cousin, more of a potential liability whose logs should be ingested into remote monitoring solutions such as SIEMs. Projects such as OWASP AppSensor, however, hold a lot of promise, putting the application back at the heart of attack detection and response, and offering a really exciting opportunity to development teams. But these ideas have been around for more than ten years, and AppSensor itself is getting close to this age, yet they still aren’t commonplace. Why might this be?

An attack-aware application is one that can detect and report suspected malicious events, evaluate a series of events, and take action if it suspects that the series of events, when considered together, is malicious in nature. Examples of events may be a high number of login attempts over a period, a forceful browsing attempt, or an obvious XSS string. Many of these events are routinely intercepted today by inline security appliances such as a Web Application Firewall (WAF). However, suspicious events may also be a lot more contextual to the application, such as a change to a parameter that should not be changed. This context may not be available to an external device such as a WAF, but it is available to the application. This leads to the ability to generate very high-fidelity security alerting, and opens up the possibility of the application itself making pragmatic defensive choices.
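The pattern described above can be sketched as a minimal AppSensor-style detection point: count suspicious events per user inside a sliding time window and trigger a response once a threshold is crossed. The threshold, window, and event names are illustrative, not taken from the OWASP AppSensor specification:

```python
# Minimal sketch of an attack-aware detection point: per-user event
# counting over a sliding window, with a response once the count
# crosses a threshold.
from collections import defaultdict, deque

class DetectionPoint:
    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.events = defaultdict(deque)  # user -> event timestamps

    def record(self, user, timestamp):
        """Record one suspicious event; return True if the user has
        crossed the threshold within the window (i.e. respond now)."""
        q = self.events[user]
        q.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

failed_logins = DetectionPoint(threshold=3, window_seconds=60)
triggered = [failed_logins.record("alice", t) for t in (0, 10, 20, 200)]
```

Because the application itself evaluates the events, the same mechanism can cover context-specific signals (such as a modified hidden parameter) that an external WAF never sees.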
A microservices architecture is not a new style of building large scale enterprise applications. Companies like Netflix and Amazon have implemented a microservices architecture to deliver successful products over the last few years.
But is a microservices architecture right for your organization? What should you focus on when getting started? How do microservices affect your business model?
In this presentation you will learn:
• What Microservices are
• Why use Microservices
• How to model Microservices
• How to automatically create your Microservice EA repository
• How to plan the Microservice roadmap
Anomaly Detection using ML in Elisa Viihde CDN – Eficode
Jere Nieminen
Service Architect – Elisa
Jere is an experienced architect specialized in video streaming technologies. He is currently working on making video streaming as smooth as possible for Elisa Viihde customers.
Discover in this webcast how AI-based solutions mitigate fraud, reduce losses, and help businesses remain competitive.
In this webcast you will learn:
- The main digital tools to mitigate fraud.
- How rule-based tools can be augmented by Machine Learning: how does this work, and what are the advantages?
- Insights on how to implement an ML-driven fraud monitoring solution.
- The challenges when working with AI-based Fraud Management solutions.
Discover the Webcast: https://bit.ly/359lsu9
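The idea of augmenting rule-based tools with machine learning can be sketched as a hybrid check: hard rules catch known fraud patterns, while an anomaly score flags unusual transactions the rules miss. The rule thresholds and the stubbed scoring function below are invented for illustration:

```python
# Toy sketch of rules + ML for fraud monitoring. The anomaly_score
# function stands in for a trained model; in practice it would be a
# model trained on historical transactions.
def rule_flags(txn):
    """Hard rules encoding known fraud patterns (thresholds invented)."""
    flags = []
    if txn["amount"] > 10_000:
        flags.append("amount_over_limit")
    if txn["country"] != txn["card_country"]:
        flags.append("country_mismatch")
    return flags

def anomaly_score(txn):
    """Stub for an ML model: distance of the amount from a typical
    spend, squashed into [0, 1)."""
    typical = 120.0
    dist = abs(txn["amount"] - typical)
    return dist / (dist + typical)

def review_needed(txn, score_threshold=0.9):
    """Flag for review if any rule fires OR the anomaly score is high."""
    return bool(rule_flags(txn)) or anomaly_score(txn) > score_threshold

normal = {"amount": 50.0, "country": "DE", "card_country": "DE"}
suspicious = {"amount": 9_500.0, "country": "DE", "card_country": "US"}
```

The rules give explainable, auditable decisions for known patterns, while the score extends coverage to novel behaviour, which is the complementarity the webcast bullet points at.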
DevSecCon London 2018: How to fit threat modelling into agile development: sl... – DevSecCon
IRENE MICHLIN, workshop
The earlier in the lifecycle you pay attention to security, the better the outcomes. Threat modelling is one of the best techniques for improving the security of your software: a structured method for identifying weaknesses at the design level. However, people who want to introduce it into their work on an existing codebase often face time pressure, and very rarely can a company afford a “security push” where all new development stops for a while in order to focus on security. Incremental threat modelling that concentrates on current additions and modifications can be time-boxed to fit the tightest of agile life-cycles and still deliver security benefits.

Full disclosure is necessary at this point: threat modelling is not the same as adding tests to a ball-of-mud codebase and eventually getting decent test coverage. You will not be able to get away with doing just incremental modelling without tackling the whole picture at some point. But the good news is that you will approach this point with more mature skills from getting the practice, and you will get a better overall model with less time spent than if you had tried to build it upfront.

We will cover the technique of incremental threat modelling, and then the workshop will split into several teams, each one modelling the addition of a new feature to a realistic architecture. The participants will learn how to find the threats relevant to the feature while keeping the activity focused (i.e. not trying to boil the ocean). This session mainly targets developers, QA engineers, and architects, but will also be beneficial for scrum masters and product owners.
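The incremental approach can be sketched as enumerating candidate STRIDE threats for only the elements added in the current iteration, rather than for the whole system. The element-to-threat mapping below is a simplified illustration, not a complete methodology:

```python
# Sketch of incremental threat modelling: enumerate candidate STRIDE
# threats for only the data-flow-diagram elements added this sprint.
# The mapping of element types to STRIDE categories is simplified.
STRIDE_BY_ELEMENT = {
    "process":    ["Spoofing", "Tampering", "Repudiation",
                   "Information disclosure", "Denial of service",
                   "Elevation of privilege"],
    "data_store": ["Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"],
    "data_flow":  ["Tampering", "Information disclosure",
                   "Denial of service"],
}

def incremental_threats(new_elements):
    """new_elements: list of (name, element_type) added this iteration.
    Returns (element, threat) pairs to triage in the session."""
    return [(name, threat)
            for name, etype in new_elements
            for threat in STRIDE_BY_ELEMENT.get(etype, [])]

added = [("payment-api", "process"), ("audit-log", "data_store")]
threats = incremental_threats(added)
```

Restricting the enumeration to the diff is what makes the exercise time-boxable: the candidate list stays proportional to the size of the change, not the size of the system.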
Introducing Kisi Pro: Powerful & Reliable Door Control – KISI Inc
For the past few months we've been working hard to build our newest hardware, and this webinar showcases our latest Kisi Pro Door Controller.
- The Future of IT: Office Automation Management
- Our Motivation for Engineering New Hardware
- Introducing Kisi Pro: Powerful & Reliable Door Control
- Get a Kisi Pro Controller in Your Office
- Q+A Session
What does it take to break out of an IoT proof of concept and deploy an enterprise-grade IoT solution? This SlideShare is an extract from a live talk presented by Bridgera.
Red Hat Forum Poland 2019 - Red Hat Open Hybrid Cloud (keynote) – Eric D. Schabell
Keynote presented at Red Hat Forum in Poland in Nov 2019: Notice in the title here we are talking about “working together”, a very important theme in this story today. Let’s take a journey through the reality facing organizations today: a reality based on the open hybrid cloud in your future.
(Internal original slides: https://docs.google.com/presentation/d/1Fd6EnGhRN0OAWeQqaG-LDADoP2k5psmkEioZIIepv0E)
Gartner 2017 London: How to re-invent your IT Architecture? – LeanIX GmbH
Slides of the Talk "LeanIX: IT Modernization in Action: How to Re-Invent Your IT Architecture?" at the Gartner Enterprise Architecture & Technology Innovation Summit in London by LeanIX Co-CEO André Christ
Microservices: Keep Complexity under Control with LeanIX Enterprise Architect... – LeanIX GmbH
IT trends like Docker and microservices promise great advantages but introduce new complexity. How can agile teams act autonomously and still create great products? Learn why agile teams, microservices and Docker complement each other and lead to a more flexible IT.
===
LeanIX offers an innovative software-as-a-service solution for Enterprise Architecture Management (EAM), based either in a public cloud or the client’s data center.
Companies like Adidas, Axel Springer, Helvetia, RWE, Trusted Shops and Zalando use the LeanIX Enterprise Architecture Management tool.
Free Trial: http://bit.ly/LeanIXFreeTrial
Title: What is Elastic?
Elastic is a search company. If you have used Uber, Tinder, Yelp... You have used Elastic! We have a wide range of use cases: from logging to security, all with the same stack.
What is a thing of the IoT? Aspiration of things narrated by a 'Thing Interpr... – Pratik Desai, PhD
The vision of connecting every networked computer with each other turned a research project into the Internet we use today, which became possible because of open Internet standards and a tangible architecture. In the chaos of buzzwords and marketing campaigns, the interoperation between connected devices, Things, has been compromised, suffocating the growth of the Internet of Things domain. Interoperability between wearable devices and other IoT components can lead to the development of highly intelligent applications, enabling non-hardware entities to be part of the wearable domain. We propose a semantic-web-assisted IoT architecture which implements standard data models described in relationship graphs. The graph-based data structure enables reasoning and intelligence at the machine level, laying down the road for innovations.
Most companies rely on multiple applications and systems to carry out day-to-day operations. However, as this system footprint gets larger, the systems tend to become more disconnected from each other, resulting in manual intervention, data duplication, and rework. In this webinar, Shevan Goonetilleke, chief operating officer at WSO2, will discuss several such examples within WSO2 and how operational processes were optimized through simple integrations.
Internet of Things security is in its infancy, but so was internet security not so long ago. We will overcome this challenge. Learn more about the security requirements of an IoT system.
Blockchain and Distributed Ledger Technologies: An EU Policy Perspective – ITU
• Digital Single Market-ICT Standards priorities
• Blockchain and financial markets
• European Parliament contributions
• The FinTech Task Force
• Application areas for blockchain
• EU initiatives
Author: Benoit Abeloos, EC, DG CNECT, Startups and Innovation Unit
Building decentralized apps: Battle of the tech stacks – BlockStars.io
A talk given by Aron van Ammers at Bitcoin Meetup Berlage Meet & Workspace, Amsterdam, Netherlands, on June 18th, 2015.
Just as Bitcoin is "decentralized money", decentralized applications or ÐApps promise to enable "decentralized everything".
Developing those applications requires technology stacks to build on. A great number of projects have taken on the task of building those stacks in part or in full, approaching the problems to be solved from different angles. These projects include Ethereum, the Eris platform, Counterparty, Colored Coins, Maidsafe, Codius, Tendermint, and others.
In the talk, Aron gave a broad overview of the technology landscape for decentralized applications as it stands today. He compared some prominent technology stacks and described some of the challenges and strategies of decentralized development.
This presentation was delivered by Pink Elephant for the launch of the DevOps Certification framework in Asia. During the two-hour breakfast session, speakers Jan-Willem Middelburg and Karen Chua explained the business case for DevOps and provided an overview of the DevOps Certification Scheme of the DevOps Agile Skills Association (DASA).
DevOps is a culture, movement or practice that emphasizes collaboration and communication between all relevant information-technology (IT) professionals to deliver high-quality, valuable IT services to customers. It aims to improve the performance of IT services by establishing flow in the delivery of all aspects of the IT service. This means creating a culture, organization, and environment in which building, testing, releasing and supporting software and infrastructure changes can happen rapidly, frequently and more reliably, often through extensive automation.
CBGTBT - Part 1 - Workshop introduction & primerBlockstrap.com
A Complete Beginner's Guide to Blockchain Technology, Part 1 of 6. Slides from the #StartingBlock2015 tour by @blockstrap.
Part 1: http://www.slideshare.net/Blockstrap/cbgtbt-part-1-workshop-introduction-primer
Part 2: http://www.slideshare.net/Blockstrap/02-blockchains-101
Part 3: http://www.slideshare.net/Blockstrap/03-transactions-101
Part 4: http://www.slideshare.net/Blockstrap/cbgtbt-part-4-mining
Part 5: http://www.slideshare.net/Blockstrap/05-blockchains-102
Part 6: http://www.slideshare.net/Blockstrap/06-transactions-102
HashRoot is a technology company that has been providing IT services to its global client base since 2009. We focus on Server Management, Application Development, Website Development, Mobile App Development, Infrastructure Management, Cloud Management, Digital Marketing and Security Services. HashRoot has provided services to 750+ clients in 80+ countries with 24/7 support. We help save time and reduce costs on technology development while our clients concentrate on their core business. HashRoot is an ISO-certified company and the winner of 5 international awards. We have offices in Kochi, the US and the UK.
We modernize your legacy systems with a scalable architecture that leverages emerging technologies, disrupting and re-imagining your business processes so you can reach digital natives and millennials with ease.
Nvent Enabling The Data Driven Enterprise, by Grafic.guru
Nvent helps customers achieve long-term business success; our consultants have a solid track record and the highest customer satisfaction ratings in the industry.
Case Study: SunTrust’s Next Gen QA and Release Services Transformation Journey, by CA Technologies
SunTrust’s journey from challenge identification and charter definition through execution practices, key metrics and results in their transformation of traditional QA and Release functions into a more cohesive, collaborative and “continuous” model.
For more information, please visit http://cainc.to/Nv2VOe
Maximize Your Enterprise DevOps Efforts and Outcomes with Value Streams, by DevOps.com
Enterprise software organizations need to modernize and transform their software development processes to gain and keep their competitive advantage by accelerating delivery of business value to customers to meet the market and customer demands. Cloud services, big data/analytics, mobile devices and apps, artificial intelligence, automation and other emerging innovations can help businesses achieve this success.
Join the editor in chief of DevOps.com, Alan Shimel, and Eric Robertson, Vice President of Product Management at CollabNet, in a live chat session that will provide you with the valuable insights enterprises need from Value Stream Management (VSM) to stay ahead in today’s market. You will learn more about:
How VSM Relates to DevOps
How VSM Benefits Business and Technology Stakeholders
The Specific Advantages of VSM
How VSM Applies to the Emerging Internet of Things
Implementing DevOps goes beyond tools, technology, and delivery teams. Doing DevOps well requires leadership buy-in, policies, metrics, and organizational alignment. You can assess your organization’s DevOps readiness by downloading MetroStar's DevOps guide to learn more.
Gone are the days of static, bulky, “authorized” content. Today's information consumers are global and more connected to communities than to static help, manuals, and knowledge bases.
DocOps is about creating a content supply chain. Collaborative, agile and continuous, DocOps lets teams "curate" content from internal and external authors throughout the product lifecycle. Content can be delivered on the web where consumers can create their desired output for only those articles they require. Content can also be linked to application UIs to provide dynamic, collaborative, assistance. Analytics provide curators with continuous operational feedback so they can make changes on the fly to improve the customer experience.
More and more organizations are turning to DevOps as a way of working together to improve the efficiency and quality of software delivery and start adding more value to the business. But what exactly is DevOps and what does it mean for you and your organization?
Join Microsoft Data Platform MVP Kendra Little to discover:
• What is DevOps and what benefits can it offer your organization?
• Who in your organization should be involved in DevOps?
• Why should your organization adopt DevOps?
• How can your organization start implementing DevOps?
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024, by Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Communications Mining Series - Zero to Hero - Session 1, by DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How it can help today’s business, and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -..., by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
How to Get CNIC Information System with Paksim Ga.pptx, by danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs, by Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Pushing the limits of ePRTC: 100ns holdover for 100 days, by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf, by Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
GridMate - End to end testing is a critical piece to ensure quality and avoid..., by ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
HATech is first and foremost in the business of helping our clients get to market faster with their online service updates, which leads to them being more innovative and therefore more competitive.
We approach this through what we call a DevOps Transformation Process, which involves breaking down the barriers between leadership, product, development, and operations teams. We teach them how to collaborate effectively and, above all else, align everyday activities tightly with the business’ needs.
As a full-service DevOps agency, we provide all the necessary expertise, technology skills and toolchains to make this happen, but we also focus on the challenging cultural changes that need to happen within the organisation to allow transformation to take place.
We embed a culture of openness and collaboration that creates an environment where alignment of business goals, innovation, effectiveness and competitiveness are all nurtured and their supporting processes and toolchains are matured through an ongoing iterative process.
Our goal is always to build long term, ongoing partnerships with our clients, keeping their DevOps transformation on track and benefitting from ongoing process, knowledge and tooling innovation generated across HATech’s partner-client ecosystem.
We’re based in Las Vegas, with presences in Reno and Malta (in the EU) and a 24x7 managed services support team in Belgrade (Serbia, SEE).
We’re a relatively young business: we opened our doors in 2015, but we bring many years of experience working with and for some very cool startups, best-in-class SMEs, and some very big names including IGT and CA Technologies.
Since we started operating, we’ve expanded our team from 2 co-founders to 8 team members and we expect to increase to 20 by the end of 2016.
From day 1 we’ve been financed by customer project income which we’re particularly proud of as it validates the value of our offering and allows us to focus on growing the business in line with our very specific vision of the future of DevOps as we contribute to making that vision a reality.
Our core consulting services focus on DevOps-oriented business transformation, helping businesses to be more innovative and competitive.
Our Engineering services focus on providing full-stack engineering augmentation to customer teams, with a good blend of generalists, specific domain expertise and out-of-the-box thinkers.
A key component of our success is our 24/7 global Managed Service capability. This blends resources across experience levels & geographic locations to ensure a timely response to customer issues around the clock. The service includes DataDog monitoring and OpsGenie alerting and resolution management combined with our JIRA-based ticketing and customer-facing knowledge management portal.
One area of business that we’re very excited about is our DevOps Pipeline Manager. This combines our constantly-evolving automation patterns into a cloud-based service which manages the entire DevOps pipeline across many cloud platforms including hybrid deployments.
Our deployment patterns rely heavily on Self-Learning Environments and Automation to ensure that the entire process requires a minimum of hands-on attention. We work a lot with container-based microservice Architectures, primarily based on Docker. Together, these principles are the key to achieving Secure, Reliable, Repeatable and Scalable cloud deployments.
HATech is vendor-neutral and is driven solely by a passion to find the right solution to answer your business needs. It doesn’t matter to us which platform you’re moving from or to, or which services you need to combine, or what technology you’re using. Our technology agnostic approach allows us to apply ‘Patterns & Tooling’ that we develop and continually update as we build out new features and capabilities, through internal product development and future client engagements.
Reselling individual vendor technologies is not a driver for our business model. That being said, our active partnerships reflect the hybrid and diverse nature of, and our expertise within, today’s DevOps, cloud, containers & microservices technology landscape.
Needless to say, integrating toolchains and automating hybrid pipelines towards Continuous Delivery is a big part of every client engagement that we work on. Our Hybrid-centric vision means that we are focused on recommending the right tooling combinations and platform choices for our clients on a case-by-case basis.
Our Transformation process begins with a Discovery Phase where we begin by clearly articulating the business needs. Through a series of assessments, we identify cultural challenges and silo targets, existing technical capabilities and how to maximize them. We then carry out an exploratory dive into the client’s applications and development pipeline. We provide a report across the four focus areas of Culture, People, Technology & Processes together with an overview of the current DevOps maturity baseline.
Having gained a clear overview of the business, the Analyze stage is all about working out where we can add real value with quick results. We’ll identify how we can drive cultural changes, bridge skills gaps, mitigate technical debt and eliminate technical and organisational weaknesses through pragmatic change. We then deliver a detailed maturity plan with defined goals. We also help define processes and responsibilities between the client, HATech and any 3rd parties involved.
We then Design a change path that clearly defines the steps for the client to move forward, focusing on people, technology, products, architecture and processes. Our top-down milestone driven plans are absolutely focused on meeting the business’ needs identified during the Discovery phase and we use an agile approach to defining success criteria at each development stage.
During the Execution phase, we operate to a sprint driven development process coordinated by one of our scrum masters with the client as the product owner. Architectural decisions, development, testing, documentation and releases, are iterated with total visibility allowing rapid direction changes to help our clients stay highly agile and competitive.
We Empower our customers by providing continual training and mentoring leading up to handover where the client takes over running the day to day processes. At the end of an engagement a full runbook is shared detailing the processes and procedures to help the client remain successful. When relevant to that continued success we also work with clients to develop a recruitment plan and/or an HATech managed services plan to provide ongoing support for their non-core activities.
Our services generally do not end at handover. Through our managed services and ongoing consultancy we continue to contribute strategic advice, tooling and automation pattern updates and access to our evolving knowledge base driven by internal R&D and contributions from our ecosystem of client-partners. Our goal is always to build long term, ongoing partnerships with our clients, keeping their DevOps transformation on track and benefitting from ongoing process, knowledge and tooling innovation.
Within a cultural framework, HATech defines DevOps as the ability to deliver from Product vision, with reliability, into a production environment multiple times per day, in an automated and repeatable fashion. Or in other words, Continuous Delivery.
Transitioning to this state is not easy and requires the organization to move through multiple stages before it arrives at this level of agility and automation. Whether the goal is to adopt a true DevOps pattern for a particular product, or bring the whole organisation up to the same level of maturity at the same time, our vision for most clients is to execute in line with an agreed maturity model starting at a high level with the ‘Continuous What?’ Model.
Embedding a DevOps culture within an organization cannot happen without a plan for how to form the culture by influencing behavior. You can't 'build' a culture; it is formed through the actions and character of the organization. The ‘Continuous What?’ DevOps model maps out the strategic high-level milestones towards reaching the goal of automated delivery of features and product needs in a timely and repeatable fashion. Each of these high-level milestones is then broken down into pipelines that are iterated to achieve the tactical goals that HATech works with our clients to define throughout any engagement.
A key component of the journey through the model from Continuous Integration, to Continuous Deployment to Continuous Delivery, is mapping the DevOps Maturity baseline at the beginning of the process. HATech carries this out during the Discovery phase and then works with the client to put in place a skills development plan to bring the team up to the required maturity level across all measurements at the appropriate stage in the process.
During a typical 5-day Discovery Engagement, we offer a half-day free pre-engagement consultation, which can be carried out by phone but which we always prefer to do on-site with the client. This is a great opportunity for the client to evaluate our approach and validate that it fits their business needs before any commitment to proceed.
During the 5 day engagement a senior HATech consultant/architect will spend a minimum of 3 days on-site conducting assessments, interviews and an exploratory dive into the client’s applications and development pipeline.
The remaining two days will be spent remotely documenting the written assessment, DevOps maturity ratings and an evaluation, with timeframe and budget estimates for a starter project identified with the client.
At the end of this short engagement, the client will have a clear understanding of:
How we work
Where key improvements can be made in their organisation (culture, people, processes & technology)
What immediate next steps they can take to deliver maximum value within the quickest timeframe
How long the next phase in their transformation will take and how much it will cost
Where HATech can add value, bridge gaps and provide a tailored DevOps service to support ongoing projects
SheKnows Media is a leading women’s lifestyle media platform with 92 million unique visitors every month. That puts them at number 26 in the top 50 U.S. Digital Media Properties (according to comScore.com), just ahead of Pinterest.
When SheKnows engaged HATech, the challenge was to migrate their VM-based environment from Joyent to AWS.
As part of this ongoing engagement, HATech designed and implemented a development & AWS deployment pipeline, heavily utilizing Docker container services with automation to push from dev-test through to production. A key feature of the pipeline delivered is that SheKnows developers can run the exact same containers locally as they run in production.
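The "identical containers everywhere" principle above can be sketched with a minimal Compose file. This is an illustrative assumption only; the service name, registry and tag are hypothetical, not SheKnows' actual configuration:

```yaml
# Hypothetical sketch: the image is built once by CI, and the same tag
# runs on a developer laptop and in production. Only environment
# configuration is injected per environment, never baked into the image.
services:
  web:
    image: registry.example.com/web:1.4.2   # illustrative registry/tag, built once by CI
    env_file: .env.local                    # swapped per environment (e.g. .env.production)
    ports:
      - "8080:8080"
```

Because the image reference is identical in every environment, "works on my machine" and "works in production" become the same statement.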
HATech completed the migration of SheKnows' first application to AWS within 10 days, with customers experiencing zero downtime during the switchover.
SheKnows have benefited from significant infrastructure cost savings as well as large resource savings due to the deployment pipeline automation delivered by HATech.
HATech continues to provide DevOps services to SheKnows for migration of additional applications to Docker + AWS in addition to providing a leadership advisory role in sprint planning, staff augmentation and monitoring & alerting for production AWS environments.
Exactuals are a rapidly growing software-as-a-service startup whose platform automates residuals payments in the media and entertainment industry.
When Exactuals engaged HATech, the challenge was to replace a failing deployment pipeline for AWS.
As part of the engagement, HATech replaced a complex Puppet pipeline that, after 18 months, still could not deploy reliably to AWS. We designed and implemented a one-click Jenkins deployment pipeline that orchestrated 28 microservices, which we re-architected to use Docker containers. Using our pipeline, Exactuals developers were able to easily spin up entire environment stacks both locally and on AWS.
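A one-click pipeline of this general shape can be sketched in declarative Jenkins syntax. The stage names and shell commands below are illustrative assumptions, not Exactuals' actual Jenkinsfile:

```groovy
// Hypothetical sketch of a one-click, Docker-based deployment pipeline.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker compose build' }          // build every microservice image
    }
    stage('Test') {
      steps { sh 'docker compose run --rm tests' } // run the test suite in a container
    }
    stage('Deploy') {
      steps { sh './deploy.sh production' }        // illustrative deploy script
    }
  }
}
```

A single "Build Now" click walks each change through the same build, test and deploy path, which is what makes short, repeatable upgrades possible.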
HATech completed the first automated release of Exactuals’ application on AWS within 5 days, with the actual upgrade process taking only 30 seconds to complete.
Exactuals have benefited from HATech unblocking critical development and deployment pipelines as well as enabling them to focus on innovating their products, rather than managing upgrade deployments.
HATech continues to provide DevOps services to Exactuals by providing a leadership advisory role in sprint planning and staff augmentation engineering & automation services.
Wedgies.com is an online polling and survey platform that lets users of all sizes conduct polls and count votes in real time. Their SaaS offering has been used for everything from school class-president elections right up to the 2015 State of the Union address, the 2015 GOP nominations and the Wall Street Journal.
When Wedgies engaged HATech, the challenge was to help them with the migration from ‘outage-prone-under-high-load’ Heroku environment to AWS and also to provide guidance on how to improve their DevOps understanding and patterns, specifically related to AWS.
In an engagement covering only 7 days, HATech stood up Wedgies’ first app in AWS in only 3 days, with just 20 minutes of downtime for the switchover, due mainly to migrating their production MongoDB database from Compose to AWS.
During the rest of the engagement, HATech also helped Wedgies embed a DevOps culture, re-architect to Docker and build deployment automation, and carried out self-reliance training before handover.
Since HATech moved Wedgies to AWS they have not had a single major outage. Wedgies credit HATech’s guidance around adopting appropriate AWS prescribed architectures as contributing to this success metric. HATech’s approach to adopting a DevOps Culture is credited with promoting the right approach towards automation within the Wedgies DevOps team.