This document provides an overview of emerging machine learning architectures, including cloud, edge, fog, and mist computing. It discusses the timeline of remote and machine learning computing from early cloud computing to current edge and fog approaches. The need for edge computing to address latency issues for applications like augmented reality and face recognition is explained. Key aspects of fog computing like its role in scalably extending cloud computing to network edges are covered. The document also provides an example of implementing deep learning for an IoT video recognition application across edge and cloud resources.
Application Delivery Platform Towards Edge Computing - Bukhary Ikhwan, OpenNebula Project
Edge computing pushes applications and data processing closer to data sources like IoT devices to enable faster results, real-time analytics, and better decision making. Docker is well-suited for application delivery in edge computing due to its lightweight containers that have a small footprint and fast start times. A demo showed containers for a learning management system deploying in seconds versus minutes for virtual machines. Offloading an ETL application to edge resources also significantly reduced bandwidth usage and processing time compared to alternatives that transferred all data to the cloud. Docker's portability and layered images make it a good fit for distributed application delivery in edge computing environments.
The document discusses the private cloud architecture being implemented at the University of the Witwatersrand. It outlines plans to build a private cloud infrastructure using open source technologies like OpenStack, Fedora, iRODS and Zimbra. The cloud will provide scalable compute and storage resources along with hosted services and a digital archive. Key steps are identifying support staff, collaborating with technology partners, and having the initial infrastructure in place by mid-November.
Hybrid Cloud: OpenStack and Other Approaches - Mirantis
On April 4, 2014, OpenStack:Now editor Nick Chase presented this talk at Great Wide Open in Atlanta, GA. It discusses the ideas behind Hybrid Cloud and some possible ways to implement it.
This document discusses cloud storage, including what it is, how it works, the benefits for businesses, and costs for users. Cloud storage involves storing digital data across multiple servers, often in different locations, that are managed by hosting companies. Key benefits include data being available from any device, automatic updates, disaster recovery if physical assets are destroyed, and scalability without needing more physical storage space. Potential disadvantages include security concerns about handing data over externally and risks of cloud service providers temporarily losing data.
This document discusses Microsoft's private cloud solutions, including Hyper-V for infrastructure as a service (IaaS) and Windows Azure Appliance for platform as a service (PaaS). It provides an overview of what a private cloud is and the benefits it offers over public clouds. It also outlines Microsoft's approach to building private clouds using Windows Server 2008 R2 Hyper-V for virtualization, System Center for management, and the Windows Azure Appliance for PaaS capabilities. Resources and guidance are provided for organizations looking to build their own private clouds.
Delivering IaaS with Open Source Software - Mark Hinkle
Mark Hinkle presented on delivering Infrastructure-as-a-Service (IaaS) using open source software. He discussed various open source tools for building cloud computing including hypervisors like KVM and Xen, object storage solutions like OpenStack Swift, and automation/orchestration tools like CloudStack and OpenStack. Hinkle emphasized that open source solutions provide many advantages for cloud computing including lower costs, collaboration, and avoidance of vendor lock-in. He also covered management tools for private clouds and highlighted the importance of automation.
Edge computing is a distributed computing architecture that processes data closer to where it is generated, at the edge of the network, rather than sending all data to centralized cloud data centers for processing. It provides benefits like increased speed and reliability, reduced latency, and better security compared to cloud computing. Edge computing is well-suited for applications in smart cities, manufacturing, healthcare, augmented reality, and AI assistants. Future directions for edge computing include improved edge-to-cloud data exchange, common data exchange between edge devices, streaming and batch data analytics, and cloud-based deployments of edge applications.
How private cloud is better than public cloud - Abhi Roy
The document compares private and public clouds based on various parameters such as governance, speed and agility, risk management, compliance, culture, and economics. Private clouds offer superior control over data and security since workloads operate within an organization's firewall. However, public clouds have greater scalability and flexibility since resources are shared, while private clouds have limited hardware sharing and scalability. The appropriate cloud solution depends on an application's security needs and category.
The document provides an overview of cloud computing, including definitions of cloud computing, deployment and service models, advantages and benefits, security considerations, virtualization, migration strategies for moving applications and workloads to the cloud, developing applications for the cloud, and future trends in cloud computing. It also includes descriptions of major cloud providers like AWS, Azure, and GCP.
Citrix Cloud Works with... the new IT reality - Citrix
The document discusses considerations for choosing a cloud architecture and how Citrix CloudPlatform addresses them. It recommends having a roadmap for cloud adoption, understanding application requirements, choosing a flexible platform, avoiding lock-in, and not turning cloud building into a complex project. Citrix CloudPlatform provides a complete solution for building private Infrastructure as a Service clouds that supports both traditional and cloud-native workloads running reliably on a common platform. It allows choosing best-of-breed components like hypervisors, storage types, and networks.
Cloud computing allows companies and individuals to access software and storage over the internet. It offers cost savings through reduced hardware/software costs and easier deployment. Cloud computing services are growing rapidly and providing capabilities that did not previously exist. In the future, cloud computing may become the standard form of computing as more services move online.
Cloud computing is a computing paradigm that delivers resources as a service over the internet. It enables scalable, on-demand access to shared computing resources like networks, servers, storage, applications and services. This document discusses cloud computing concepts and related technologies like virtualization, containers, and open source cloud platforms. It provides overviews of infrastructure as a service (IaaS) platforms like Eucalyptus, OpenStack and OpenNebula that can be used to build private clouds and leverage cloud technologies. The document also compares these open source cloud platforms based on their origins, architectures, hypervisor support, operating system support and other features.
A dynamic infrastructure demands hybrid solutions with centralized systems management. IBM System z is therefore a key element of a cloud solution. Learn how to manage a dynamic infrastructure in the cloud.
Read more here: bit.ly/softwaredagsystemz3
This document discusses various topics related to cloud technologies. It begins with innovations enabled by cloud computing, such as artificial intelligence, smart cities, driverless cars, and the internet of things. It then defines cloud computing and describes its key characteristics, service models (infrastructure as a service, platform as a service, software as a service), and deployment models (public, private, hybrid). The document outlines advantages and disadvantages of cloud computing, as well as trends like edge computing and opportunities for careers as cloud architects. It also touches on cloud forensics, statistics, and some interesting facts about cloud data storage and usage.
This document is a seminar report on cloud computing submitted by Binesh Kr. Singh in partial fulfillment of a master's degree. It defines cloud computing, discusses different cloud service models including SaaS, IaaS, PaaS, and deployment models. It covers advantages like reduced costs, accessibility, and flexibility. Disadvantages discussed include security, vendor lock-in, and downtime. Examples are provided for each cloud service model. The report concludes that cloud computing is transforming IT and businesses can realize value through proper planning and migration services.
The document outlines different cloud deployment and service models. Public clouds use shared infrastructure while private clouds have dedicated infrastructure. It notes opportunities in public clouds around innovation, economies of scale, and speed to market but also perceived risks regarding security and privacy. It shows infrastructure as a service (IaaS), software as a service (SaaS), and business process as a service (BPaaS) as common cloud service models.
Cloud computing means "a type of Internet-based computing," where different services such as servers, storage, and applications are delivered to an organization's computers and devices through the Internet.
Intro to cloud computing — MegaCOMM 2013, Jerusalem - Reuven Lerner
What is cloud computing? This is an introduction that I gave at MegaCOMM 2013, a conference for technical writers in Jerusalem. The talk describes how the combination of Internet access, virtualization, and open source have made computing a utility that we can turn on and off at will -- similar in some ways to electricity, water, and other utilities with which we're familiar.
Design and implementation of hybrid cloud computing architecture based on clo... - aish006
This slide deck was prepared by G. Aishwarya of Global Academy Of Technology, Bangalore, under the guidance of Ms. Gopika P (Asst. Professor), Global Academy Of Technology, on 04/05/16, as part of the 8th Semester Technical Seminar of the VTU curriculum for the Computer Science and Engineering Department, 2010 Scheme.
Hybrid cloud uses a mix of on-premises, private cloud and public cloud services. It gives businesses flexibility to move workloads between private and public clouds as needs change. Hybrid cloud offers more options than solely using public or private clouds. It allows choosing where to deploy applications and tools based on timeliness and resources. Hybrid cloud also lets organizations continue using existing private infrastructure while exploring public cloud. It provides a way to securely store sensitive data on-premises while putting less important data in public cloud.
Do you want to know what cloud computing is? Here you can learn the history of cloud computing and its applications. This deck is aimed at cloud computing beginners.
This document discusses NetApp's approach for helping customers connect hybrid data centers and move data seamlessly between on-premises and cloud environments. It introduces NetApp's Universal Data Platform which features Clustered Data ONTAP, the ability to dynamically move data between clouds, and extensive choice for customers in where and how they deploy data. The platform allows customers to create a cloud data fabric that connects various environments.
The document discusses cloud computing standards and defines cloud computing. It notes that cloud computing converges many technologies and represents a leap in capabilities. Standards will be important to ensure interoperability between proprietary and standards-based clouds. The document proposes that the US government establish minimum standards and architecture to enable agencies to create interoperable cloud capabilities through a federal cloud infrastructure. This would promote standardization, security, and application portability across agency clouds without limiting innovation.
This document discusses firewalls, including their definition, history, types, and purposes. A firewall is a program or hardware device that filters network traffic between the internet and an internal network based on a set of security rules. There are different types of firewalls, including packet filtering routers, application-level gateways, and circuit-level gateways. Firewalls aim to restrict network access and protect internal systems by only allowing authorized traffic according to a security policy.
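To make the packet-filtering idea concrete, here is a minimal sketch, assuming nothing from the document beyond the concept: an ordered rule list is checked top to bottom, and unmatched traffic falls through to a default deny.

```python
# Minimal sketch of packet filtering (concept only, not from the document):
# rules are checked in order and unmatched traffic hits a default deny.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str      # "allow" or "deny"
    src: str         # source prefix to match, e.g. "10.0."; "" matches any
    dst_port: int    # destination port; 0 matches any

def filter_packet(rules, src_ip, dst_port):
    """Return the action of the first matching rule, else default-deny."""
    for r in rules:
        if src_ip.startswith(r.src) and r.dst_port in (0, dst_port):
            return r.action
    return "deny"    # only traffic authorized by the policy passes

policy = [
    Rule("allow", "10.0.", 443),   # internal hosts may reach HTTPS
    Rule("deny", "", 23),          # block telnet from anywhere
]
print(filter_packet(policy, "10.0.1.5", 443))    # allow
print(filter_packet(policy, "203.0.113.9", 80))  # deny (no rule matched)
```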
OpenStack NFV Edge computing for IoT microservices - openstackindia
This document discusses using OpenStack and OpenShift for IoT and edge computing. It proposes a three-tier IoT architecture with devices, intelligent gateways for real-time processing at the edge, and data centers. OpenShift allows for scalable microservices deployment across this architecture. OpenStack provides the virtualization infrastructure for NFV edge computing with the intelligent gateways. The combination provides a platform for applications like NB-IoT from the edge to the cloud.
Edge computing pushes applications and data processing closer to data sources like IoT devices to enable low latency and real-time insights. Docker containers are well-suited for edge computing due to their small size, fast deployment, and ability to run on resource-constrained edge devices. A demo showed containers for a learning management system deployed in seconds at an edge location versus minutes for virtual machines. Offloading an ETL application to edge resources also significantly reduced bandwidth usage versus processing in the cloud. Docker provides a lightweight container-based platform to efficiently deliver and manage applications at the edge.
IoT Microservices at the Edge with Eclipse ioFog - Kilton Hopkins
Learn how Eclipse ioFog open-source Fog Computing lets you create microservices for the Internet of Things and run them in any physical location you desire.
John Clegg gave a talk at ScaleConf about making performance a feature. He discussed how the Impossible Mission Force team at Xero treats performance as part of their development process. Clegg explained that focusing on performance leads to higher customer satisfaction, better conversion rates, and lower costs, but cautioned that premature optimization can be harmful. The key is optimizing critical parts of the system as the product and customer base grow over time. Clegg offered tips on getting business buy-in, having developers take ownership of performance, driving education and culture change, and using metrics to make performance a feature.
In this presentation we will talk about the microservices approach and how it can be implemented in an IoT ecosystem.
The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
A possible solution for controlling IoT systems easily is to create an intelligent platform using a microservices architecture; a minimal sketch of one such service follows.
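As a sketch of the idea rather than anything prescribed in the presentation, the service below exposes device readings through a lightweight HTTP resource API using only the Python standard library; the endpoint names and payloads are illustrative assumptions.

```python
# A minimal sketch (endpoint names and payloads are assumptions, not from
# the presentation) of one small IoT microservice exposing device readings
# over a lightweight HTTP resource API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

READINGS = {"sensor-1": {"temp_c": 21.4}}  # in-memory store for the sketch

class DeviceAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /devices/sensor-1 returns that device's latest reading
        device_id = self.path.rstrip("/").split("/")[-1]
        reading = READINGS.get(device_id)
        body = json.dumps(reading if reading else {"error": "not found"}).encode()
        self.send_response(200 if reading else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice runs in its own process; an API gateway or
    # intelligent edge gateway would route requests between services.
    HTTPServer(("0.0.0.0", 8080), DeviceAPI).serve_forever()
```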
Cloud Computing as Innovation Hub - Mohammad Fairus Khalid, OpenNebula Project
Cloud computing provides an innovation platform beyond just cost savings. New technologies like containers, microservices, and APIs enable collaboration and mobility. Applications are designed to be stateless, transactional, and deployed atomically. This paradigm shift supports real-time scalability, insights from big data, and interconnected devices and people. Use cases include neighborhood watch, emergency response, and open data platforms. Cloud is impacted by mobility, social media, and the internet of things, moving away from silos towards collaboration across applications, data, and people.
This is a presentation for an academic talk about cloud computing for intelligent video surveillance, i.e. VSaaS, given in 2010. Some of our research results are also presented.
Typical disaster recovery plans leverage backup and/or replication to move data out of the primary data center and to a secondary site. Historically, the secondary site is another data center that the organization maintains. But now, companies are looking to the cloud to become a secondary site, leveraging it as a backup target and even a place to start their applications in the event of a failure. The problem with this approach is that it merely simulates a legacy design and presents some significant recovery challenges.
This document provides instructions on various Docker commands and concepts. It begins with definitions of Docker and the differences between VMs and Docker containers. It then covers topics like installing Docker, finding Docker images and versions, building images with Dockerfiles, running containers with commands like docker run, and managing images and containers.
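The document works with the docker CLI; as an assumed equivalent for illustration, the same workflow can be driven from Python with the docker-py SDK. The image and tag names here are placeholders.

```python
# Assumed illustration: the document shows CLI commands (docker pull/build/
# run/ps); the docker-py SDK drives the same daemon from Python.
# Requires `pip install docker` and a running Docker daemon.
import docker

client = docker.from_env()                  # connect to the local daemon

client.images.pull("alpine")                # ~ docker pull alpine
out = client.containers.run(                # ~ docker run --rm alpine echo ...
    "alpine", ["echo", "hello from a container"], remove=True
)
print(out.decode().strip())

# ~ docker build -t myapp:latest .  (assumes a Dockerfile in this directory)
image, _logs = client.images.build(path=".", tag="myapp:latest")
print(image.tags)

for c in client.containers.list(all=True):  # ~ docker ps -a
    print(c.short_id, c.status)
```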
This document discusses how communication service providers can leverage edge cloud computing and virtualized transport (T-NFV) to offer new services and generate additional revenue streams. It notes that many emerging services are moving to or being created in the cloud but require communications connectivity. Edge cloud computing addresses this by situating services closer to end users with lower latency requirements. The document outlines how T-NFV can virtualize transport functions to enable fast, automated delivery of edge cloud services while differentiating the CSP. This virtualized transport is presented as key to enabling edge computing opportunities across sectors like IoT, smart cities, healthcare and more.
IoT World Forum Press Conference - 10.14.2014 - Bessie Wang
1. The document summarizes Cisco's Internet of Things (IoT) World Forum that took place in Chicago in October 2014.
2. It discusses Cisco's strategy and focus areas around IoT, including IoT infrastructure, vertical solutions, services, investment, and partner ecosystem.
3. It also highlights announcements around new IoT products and technologies from Cisco at the forum, such as new platforms and applications for Fog computing and improved IoT security capabilities.
Live migration in Mobile Edge Computing (MEC) - Andy Jones
In cellular networks, the M in MEC stands for Mobile. Mobility in MEC demands that applications survive handover between MEC server nodes deployed at the network edge; the sweet spot for MEC server deployments is at aggregation sites serving clusters of cells. Performing a real-time live migration at MEC-server-to-MEC-server handover with low latency and bandwidth overhead requires deconstructing the application into an idle/stateless portion and one or more per-session stateful portions, and involves a multi-step pre-emptive approach to transferring the application data to the new serving MEC server. The pre-emptive instantiation of the application in likely handover targets (i.e. neighbouring MEC servers) could leverage SON techniques such as automatic neighbour relations. Furthermore, container frameworks will reduce these overheads further compared with approaches based on VM migration.
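A toy sketch of that split, with all names and structures invented for illustration rather than taken from the talk: the stateless portion is assumed to be pre-deployed everywhere (e.g. as a container), and per-session state is pre-staged to neighbouring MEC servers so that handover only has to confirm it.

```python
# Illustrative sketch only: stateless service logic is deployed everywhere;
# per-session state is replicated ahead of time to likely handover targets.
import copy

class MecServer:
    def __init__(self, name):
        self.name = name
        self.sessions = {}     # per-session stateful portion
        self.neighbours = []   # e.g. learned via SON automatic neighbour relations

    def prestage(self, session_id, state):
        """Push a copy of session state to every likely handover target."""
        for n in self.neighbours:
            n.sessions[session_id] = copy.deepcopy(state)

    def update_session(self, session_id, state):
        self.sessions[session_id] = state
        self.prestage(session_id, state)   # multi-step pre-emptive transfer

def handover(src, dst, session_id):
    # Only a final confirmation is needed; the bulk of the state moved early.
    assert session_id in dst.sessions, "state should already be pre-staged"
    del src.sessions[session_id]

a, b = MecServer("agg-site-A"), MecServer("agg-site-B")
a.neighbours = [b]
a.update_session("ue-42", {"video_pos_s": 17.3})
handover(a, b, "ue-42")
print(b.sessions["ue-42"])   # session continues on the new serving server
```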
Why edge computing is critical to hybrid IT and cloud success - ClearSky Data
There's too much data growth to keep it all local, but sending data to the cloud can introduce performance, latency and access issues. Edge computing alleviates all three.
Accelerating Real-Time Analytics Insights Through Hadoop Open Source Ecosystem - DataWorks Summit
This document discusses accelerating real-time analytics through the Hadoop open source ecosystem. It highlights Intel's contributions to open source projects like Apache Hadoop and Apache Spark to drive mainstream adoption of advanced analytics. Real-time analytics can provide insights using data as it arrives rather than after it is stored. The document explores use cases for real-time analytics in healthcare, social media, and security and how Intel is working to accelerate solutions in these domains using its data platform and open source technologies.
Understanding Akka Streams, Back Pressure, and Asynchronous Architectures - Lightbend
The term 'streams' has been getting pretty overloaded recently–it's hard to know where to best use different technologies with streams in the name. In this talk by noted hAkker Konrad Malawski, we'll disambiguate what streams are and what they aren't, taking a deeper look into Akka Streams (the implementation) and Reactive Streams (the standard).
You'll be introduced to a number of real life scenarios where applying back-pressure helps to keep your systems fast and healthy at the same time. While the focus is mainly on the Akka Streams implementation, the general principles apply to any kind of asynchronous, message-driven architectures.
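Akka Streams itself is a Scala/Java library; purely to illustrate the back-pressure principle the talk describes, here is a minimal Python sketch in which a bounded buffer forces a fast producer down to the consumer's rate instead of exhausting memory.

```python
# Not Akka Streams (that implementation is Scala/Java): a minimal sketch of
# back-pressure, where a bounded buffer paces the producer to the consumer.
import queue
import threading
import time

buf = queue.Queue(maxsize=4)      # bounded buffer = the back-pressure point

def producer():
    for i in range(20):
        buf.put(i)                # blocks when the buffer is full
    buf.put(None)                 # sentinel: end of stream

def consumer():
    while (item := buf.get()) is not None:
        time.sleep(0.01)          # the slow consumer sets the pace
    print("stream drained without unbounded buffering")

threading.Thread(target=producer).start()
consumer()
```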
Microservices: The Future-Proof Framework for IoT - Capgemini
Dr Michael Capone, Principal Analyst, Capgemini
The data generated by IoT-enabled machines, vehicles and devices can provide companies with insight into user behaviour that they can use to create a personal connection with their customers. Companies are, therefore, scrambling to implement IoT systems in order to generate, capture, protect, and analyse this valuable data. But the insights created are only valuable when they trigger consequent decisions and timely actions. There are many potential users of IoT data, such as marketing, sales, field service, product development, customer support, operations, and supply chain, not to mention external users like vendors and partners. Each user group needs to be able to access and select different data and apply different logic and analytic approaches to perform specific tasks.
Furthermore, each group can have unique usability requirements. As companies become more IoT mature and start to plan for “data actionability,” the disadvantages of a homogenous IoT stack or departmental systems become obvious. The best option from a data quality, user acceptance, and ROI perspective is a microservices IoT platform.
O'Reilly Webcast: Architecting Applications For The Cloud - O'Reilly Media
This presentation analyzes aspects of the Amazon EC2 IaaS cloud environment that differ from a traditional data center and introduces general best practices for ensuring data privacy, storage persistence, and reliable DBMS backup. Presented by Jorge Noa, CTO of Hyperstratus
This document provides an overview of network attached storage (NAS). It discusses the origins of NAS from the 1970s development of Ethernet and file sharing protocols. Key NAS concepts are explained, including the differences between block-level and file-based data access. Common NAS techniques such as file systems, shares, authentication, and data protection methods like snapshots are also outlined. The document aims to give readers foundational knowledge about NAS technologies.
Building Private Clouds for HPC with OpenNebula: Reference Deployments & Less... - Ruben S. Montero
This document discusses using private clouds for high-performance computing and describes deployments at CERN and Fermilab. It outlines two approaches for HPC and IaaS clouds and lessons learned, including the need to automate and scale deployments, ensure interoperability, and address scientists' requirements for customization and access. It also proposes a hybrid grid/cloud approach under the StratusLab project to achieve agility while maintaining federation and a uniform user experience.
The document summarizes an AWS user group presentation by Shaimaa Esmaeil on AWS101. The presentation introduced cloud computing concepts, AWS global infrastructure and services, and demonstrated EC2 and S3. It discussed on-premises vs cloud, cloud models (IaaS, PaaS, SaaS), AWS regions and availability zones. It provided overviews of EC2 instances, AMIs, types, EBS, security groups and S3 buckets and objects. Useful training and practice exam resources were also shared.
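The demo used the AWS console; as an assumed programmatic equivalent of the same EC2/S3 concepts, this sketch uses boto3 (the AWS SDK for Python) with a placeholder bucket name and pre-configured credentials.

```python
# Assumed equivalent of the EC2/S3 demo using boto3 (pip install boto3);
# the bucket name is a placeholder and AWS credentials must be configured.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
s3.create_bucket(Bucket="my-demo-bucket-12345")   # bucket names are global
s3.put_object(Bucket="my-demo-bucket-12345", Key="hello.txt", Body=b"hi")

ec2 = boto3.client("ec2", region_name="us-east-1")
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])        # regions contain AZs
```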
This document provides an overview and summary of a presentation on cloud computing fundamentals and career opportunities. The presentation covers topics like the basics of cloud computing, advantages of moving to the cloud, major AWS services, cloud deployment models and service models, characteristics of cloud computing like scalability and elasticity, and career opportunities in cloud computing including learning paths and certifications. It also discusses latest technology trends and takes questions from the audience. The presentation aims to provide a better understanding of cloud computing, opportunities in the field, and how to achieve cloud certifications.
Cloud Migration, Application Modernization and Security for Partners - Amazon Web Services
As AWS continues to expand, enterprise customers are increasingly looking to our partner ecosystem to assist in migrating their workloads to the cloud. This session describes the challenges, lessons learned, and best practices for large-scale application migrations. We will use real examples from our consulting partners and AWS Professional Services to illustrate how to move workloads to the cloud while modernizing the associated applications to take advantage of the unique benefits of AWS. We will also dive into how to use an array of AWS services and features to improve customers' security posture as they migrate and once they are up and running in the cloud.
Morgan Hill offers comprehensive training for the corporate, enterprise architect on the Amazon Web Services (AWS) platform. This AWS training is delivered by experienced architects used to operating in a corporate infrastructure environment.
This document summarizes Darren Shepherd's presentation about Stampede.io, a hybrid IaaS/Docker orchestration platform he developed. It can run both VMs and containers consistently. Stampede.io provides portable cloud infrastructure, including compute, storage, and networking capabilities using Linux technologies. Darren discussed how Stampede.io could help normalize the infrastructure market by reducing reliance on large cloud providers if it can tackle portable storage and networking challenges for containers. He demonstrated a Stampede deployment across Digital Ocean nodes that launched over 127,000 containers reliably.
Cloud Migration, Application Modernization, and Security - Tom Laszewski
As AWS continues to expand, enterprise customers are looking to our partner ecosystem to assist in migrating their workloads to the cloud. This session describes the challenges, lessons learned and best practices for large scale application migrations. We will use real examples from our consulting partners and AWS Professional Services to illustrate how to move workloads to the cloud while modernizing the associated applications to take advantage of AWS’ unique benefits. We will also dive into how to use an array of AWS services and features to improve a customer’s security posture as they are migrating and once they are up and running in the cloud
This document discusses how VIA Technologies used AWS to address challenges from the COVID-19 pandemic for their 6nm IC design project. The pandemic impacted their project schedule unexpectedly and required work from home. AWS helped by quickly building a secure EDA infrastructure that improved productivity and may have allowed their project timeline to be accelerated. It provided proven EDA execution, smooth data transfer, and ongoing cost monitoring benefits. This case demonstrated how the cloud can provide new approaches for IC design projects during difficult situations.
For our next ArcReady, we will explore a topic on everyone's mind: cloud computing. Several companies in the industry have announced cloud computing services. In October 2008 at the Professional Developers Conference, Microsoft announced the next phase of our Software + Services vision: the Azure Services Platform. The Azure Services Platform provides a wide range of internet services that can be consumed from both on-premises environments and the internet.
Session 1: Cloud Services
In our first session we will explore the current state of cloud services. We will then look at how applications should be architected for the cloud and explore a reference application deployed on Windows Azure. We will also look at the services that can be built for on-premises applications using .NET Services. We will also address some of the concerns that enterprises have about cloud services, such as regulatory and compliance issues.
Session 2: The Azure Platform
In our second session we will take a slightly different look at cloud-based services by exploring Live Mesh and Live Services. Live Mesh is a data synchronization client with a rich API to build applications on. Live Services are a collection of APIs that can be used to create rich applications for your customers. Live Services are based on internet-standard protocols and data formats.
This document outlines an architecture vision that includes business architecture, information architecture, infrastructure architecture, data architecture, integration architecture, and security architecture. It discusses key concepts like scalability, elasticity, converting capital expenditures to operating expenditures, pay per use, availability across data centers, multi-tenant architecture, NoSQL databases, risk models, control frameworks, use cases, and roadmaps. It also provides examples of AWS services that could fulfill various architecture components and needs related to storage, databases, analytics, networking, developer tools, and security.
The document provides an overview of Windows Azure cloud storage. It discusses cloud computing fundamentals and models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It introduces Windows Azure storage services including blobs, tables, queues, and files. It describes features like data replication, storage objects, and durability options. It also provides instructions for using the Azure management portal and C++ SDK to interact with Azure storage.
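The deck works through the Azure management portal and the C++ SDK; as an assumed equivalent for illustration, the same blob workflow looks like this with the azure-storage-blob Python SDK (the connection string and names are placeholders).

```python
# Assumed illustration with the Python SDK (pip install azure-storage-blob);
# the deck itself uses the portal and the C++ SDK. Names are placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = "<your storage account connection string>"   # placeholder
service = BlobServiceClient.from_connection_string(conn_str)

service.create_container("demo")                          # one-time setup
blob = service.get_blob_client(container="demo", blob="greeting.txt")

blob.upload_blob(b"hello azure storage")                  # durably replicated
print(blob.download_blob().readall())                     # b'hello azure storage'
```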
The Pendulum Swings Back: Converged and Hyperconverged Environments - Tony Pearson
The document discusses the history of data storage technologies and how the approach is shifting back towards converged and hyperconverged systems. It provides an overview of converged infrastructure solutions like IBM's VersaStack, which combines Cisco servers and networking equipment with IBM storage systems. The document also summarizes IBM's Storwize and FlashSystem storage platforms which can be used in converged and hyperconverged environments.
🌥️ “Cloud 101” is an event organized by our club's ☁️ Cloud lead to introduce students to the world of cloud computing. The event aims to equip students with the 🔧 skills and 💡 knowledge needed to get started with cloud computing.
👨💼 Host: The event will be hosted by the ☁️ Cloud lead of our club, who has extensive experience in cloud computing.
🎯 Aim: The event aims to provide an introduction to cloud computing for students who are new to the field.
📚 Topics: The event will cover a range of topics related to cloud computing, such as ☁️ cloud architecture, 🔒 cloud security, ☁️ cloud services, ☁️ cloud deployment, and more.
👥 Activities: In addition to talks and workshops, the event will also feature hands-on activities and interactive sessions, designed to help students get a first-hand experience of working with cloud computing tools and technologies.
🤝 Networking: The event will provide ample opportunities for networking and connecting with like-minded individuals who share a passion for cloud computing.
📖 Prerequisites: No prior knowledge or experience in cloud computing is required to attend the event. The event is open to all students who are curious about the field and willing to learn.
📝 Registration: The event is free of cost and open to all students. However, pre-registration is mandatory to attend the event, as seats are limited.
So, if you want to get started with cloud computing and learn from an experienced ☁️ Cloud lead, join us at Cloud 101 – Your Introduction to Cloud Computing! 🚀
The document provides guidance on cloud architecture best practices for architects. It discusses 7 key lessons: 1) design for failure and nothing fails, 2) loose coupling sets you free, 3) implement elasticity, 4) build security in every layer, 5) don't fear constraints, 6) think parallel, and 7) leverage many storage options. The document uses examples of moving a web architecture to AWS to illustrate applying these lessons around scalability, availability and resilience.
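As one concrete reading of lesson 1, "design for failure and nothing fails" (the talk does not prescribe this exact code), a retry wrapper with exponential backoff and jitter keeps transient faults from cascading into outages.

```python
# A minimal sketch of design-for-failure (an assumption, not code from the
# talk): retry a flaky call with exponential backoff and jitter.
import random
import time

def with_retries(call, attempts=5, base_delay=0.2):
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise                      # give up after the last attempt
            # exponential backoff plus jitter avoids synchronized retry storms
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

def flaky():
    if random.random() < 0.5:
        raise ConnectionError("transient fault")
    return "ok"

print(with_retries(flaky))
```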
This document provides an overview of building secure cloud architecture. It discusses cloud characteristics and services models like IaaS, PaaS, and SaaS. It also covers the shared responsibility model between providers and customers. Additional topics include compliance requirements, privacy basics, architecting for availability, network separation, application protection, identity and access management, monitoring tools, log management, and containers security. The document aims to educate readers on best practices for securely designing cloud infrastructure and applications.
This document provides an overview of virtualization and cloud computing technologies. It begins with a brief history of computing from mainframes to personal computers and networks. It then discusses how server virtualization and consolidation led to more efficient use of resources and the emergence of data centers. Next, it describes how cloud computing builds upon virtualization by providing on-demand access to computing resources over the internet. It outlines the key characteristics, deployment models, and types of cloud services. Finally, it discusses some advantages and disadvantages of cloud computing.
Refining Your API Design - Architecture and Modeling Learning Event - LaunchAny
APIs are a conversation that involves everyone, from developers to end-users and even machine-to-machine. Yet, we can miss the mark when designing an API that delivers on the desired outcomes of the end user. In this talk, James discusses the factors that ensure an API delivers value to the end user. He will explore some techniques on refining your API design before it goes live. He will also explore the challenges of microservices and why they may not be what you think they are. Along the way, we will discuss techniques that can accelerate the API design and delivery process.
Event-based APIs are becoming more popular, enabling developers to craft new integrations and solutions that go beyond the original design of an API. Yet, there remains a challenge: how can teams design thoughtful event-based APIs that are long-lasting, evolvable, and discoverable? This talk will dive into the design practices of event-based APIs, including tips for determining which protocol(s) you should select, design patterns we should apply, and anti-patterns should we avoid. We will also look at how AI and tools such as ChatGPT are starting to shape the next generation of APIs.
Delivered on May 10, 2023 for the EDA Summit
Event-based API Patterns and Practices - AsyncAPI Online Conference - LaunchAny
This document discusses API design patterns for event-based APIs. It begins with an introduction to the author and overview of popular API styles. It then covers several options for asynchronous API design like webhooks, server-sent events, websockets, and streaming protocols. The remainder discusses specific patterns for event payload design including thin notifications, hypermedia links, schema evolution, and separating internal and external events. It emphasizes putting careful design into event formats as they form a contract like an API.
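A sketch of the thin-notification pattern the summary mentions, with invented field names and URLs: the event carries identity plus a hypermedia link rather than the full record, so consumers dereference the link for current state.

```python
# Illustrative thin-notification event payload (names/URLs are invented):
# identity and a hypermedia link, not a full snapshot of the record.
import json

thin_event = {
    "eventType": "invoice.paid",
    "eventId": "evt_0193",
    "occurredAt": "2023-05-10T14:03:00Z",
    "data": {"invoiceId": "inv_884"},
    "_links": {                      # consumers dereference for fresh state
        "invoice": {"href": "https://api.example.com/invoices/inv_884"}
    },
}
print(json.dumps(thin_event, indent=2))
```

Because consumers fetch the current state on demand, a thin event also tolerates schema evolution better than a fat payload: the contract is the link, not the full record shape.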
GlueCon 2019: Beyond REST - Moving to Event-Based APIs and Streaming - LaunchAny
For more than a decade, web APIs have replaced the previous generation of web services. Throughout this period of growth, most APIs have been restricted to request-response over HTTP. We are now seeing a move back to eventing with the popularity of webhooks. Additionally, streaming is becoming another option for connecting services, apps, and devices. In this talk, we will look at the opportunities that event-based APIs and streaming offer and how our software architecture is evolving to handle these new styles of API interaction.
Austin API Summit 2019 - APIs, Microservices, and Serverless: The Shape of Th... - LaunchAny
A look at the growth of APIs, the influence of microservices and serverless, and the new enterprise API platform stack including API profiles, multiple API styles, and data management
APIStrat Keynote: Lessons in Transforming the Enterprise to an API Platform - LaunchAny
This document outlines lessons from transforming an enterprise to an API platform. It discusses 5 key lessons: 1) developing an API strategy, 2) implementing federated API governance, 3) modernizing architecture and delivery, 4) increasing API adoption, and 5) defining platform processes. The goal is to offer a platform that supports internal developers, public app developers, customers, and third-party approved apps through APIs, streams, and events.
Austin API Summit 2018: Are REST APIs Still Relevant Today? - LaunchAny
A look at common API styles available today, a look back at historical API styles, and guidance for selecting the right API styles for your organization. A deep dive into HTTP, mentioned in the presentation, can be found at: http://bit.ly/power-http
GlueCon 2018: Are REST APIs Still Relevant Today? - LaunchAny
A look at common API styles available today, a look back at historical API styles, and guidance for selecting the right API styles for your organization. A deep dive into HTTP, mentioned in the presentation, can be found at: http://bit.ly/power-http
Lessons in Transforming the Enterprise to an API Platform - LaunchAny
A look at lessons from our recent consulting engagements on why and how enterprises are moving from an API program to an API platform as part of their digital transformation. Includes 5 common practices we see across successful enterprises as they move to an API platform. Recording: https://www.youtube.com/watch?v=Km-mCx0Zbgo&feature=youtu.be
APIStrat 2017: API Design in the Age of Bots, IoT, and Voice - LaunchAny
Our API design should be user-first: a reflection of the kinds of capabilities and outcomes our users expect. New devices and software interaction will change the way we design web APIs. Presented at APIStrat 2017
API:World 2016 - Applying Domain Driven Design to APIs and Microservices - LaunchAny
Presentation from API:World 2016 that answers the following questions:
How are APIs and microservices related?
How do I find the right size for my microservices?
And how do I get there if I have a monolithic architecture?
Moving Toward a Modular Enterprise - All About the API Conference 2016 - LaunchAny
A look at how APIs and microservices are driving the enterprise toward a more modular, connected approach to software development. Also outlines the key transformation steps used by CIOs and CTOs to address digital transformation and achieve a more modular enterprise.
Designing APIs and Microservices Using Domain-Driven Design - LaunchAny
Presented at GlueCon 2016. Applying good software engineering practices, system design, and domain-driven design for your public APIs and microservices
Applying Domain-Driven Design to APIs and Microservices - Austin API Meetup - LaunchAny
This document discusses applying domain-driven design principles to API and microservices architecture. It recommends using an outside-in design approach where the data model is separate from the object model and resource model. Domain-driven design helps identify context boundaries, and microservices require renewed focus on system and API design. Modeling the domain entities, relations, states and events defines the resources exposed by each API. This modular design increases composability and the ability to replace services over time.
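A minimal sketch of that modeling step, with entity and event names invented for illustration: domain entities, their state transitions, and the events they emit map directly onto the resources and operations an API exposes within its bounded context.

```python
# Illustrative only (entity/event names are assumptions): modeling a domain
# entity whose states and events define the API resources of its context.
from dataclasses import dataclass, field
from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"
    PLACED = "placed"
    SHIPPED = "shipped"

@dataclass
class Order:                       # entity inside the "Ordering" context
    order_id: str
    state: OrderState = OrderState.DRAFT
    events: list = field(default_factory=list)

    def place(self):
        # a state transition becomes both an API operation
        # (e.g. POST /orders/{id}/place) and a published domain event
        self.state = OrderState.PLACED
        self.events.append({"type": "OrderPlaced", "order_id": self.order_id})

o = Order("o-1")
o.place()
print(o.state, o.events)
```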
APIs Are Forever - How to Design Long-Lasting APIs - LaunchAny
Teams often struggle with balancing the complexity of legacy applications, limited time, and limited resources when designing APIs. The result is often the release of less-than-ideal API design that meets the immediate needs of the client but misses opportunities for longer-term value. This talk explores systems design and domain-driven design (DDD) for API design thinking and how to apply this technique to your design process to create a clear, well-designed, long-lasting API. Presented at API Strategy and Practice 2015
API Thinking - How to Design APIs Through Systems Design - LaunchAny
A 5-minute discussion about how to improve API design by focusing on domain modeling (to identify entities, relationships, transitions, and events) and systems design (to find the context boundaries for our APIs).
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped to keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
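As a taste of what such a calculation engine looks like in practice, here is a minimal power-flow sketch assuming the project's Python package (power-grid-model on PyPI); the two-node network follows the package's documented input format, but all parameter values are invented for illustration:

```python
# Minimal power-flow sketch, assuming `pip install power-grid-model`.
# Component fields follow the package's documented input format; the network
# itself (two 10.5 kV nodes, one line, one source, one load) is illustrative.
from power_grid_model import LoadGenType, PowerGridModel, initialize_array

# Two 10.5 kV nodes.
node = initialize_array("input", "node", 2)
node["id"] = [1, 2]
node["u_rated"] = [10.5e3, 10.5e3]

# One line connecting them, with placeholder impedance values.
line = initialize_array("input", "line", 1)
line["id"] = [3]
line["from_node"] = [1]
line["to_node"] = [2]
line["from_status"] = [1]
line["to_status"] = [1]
line["r1"] = [0.25]
line["x1"] = [0.2]
line["c1"] = [10e-6]
line["tan1"] = [0.0]

# A source at node 1 and a 2 MW constant-power load at node 2.
source = initialize_array("input", "source", 1)
source["id"] = [5]
source["node"] = [1]
source["status"] = [1]
source["u_ref"] = [1.0]

sym_load = initialize_array("input", "sym_load", 1)
sym_load["id"] = [4]
sym_load["node"] = [2]
sym_load["status"] = [1]
sym_load["type"] = [LoadGenType.const_power]
sym_load["p_specified"] = [2e6]
sym_load["q_specified"] = [0.5e6]

model = PowerGridModel({"node": node, "line": line, "source": source, "sym_load": sym_load})
result = model.calculate_power_flow()
print(result["node"]["u_pu"])  # per-unit voltage at each node
```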
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component of orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module-SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
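For intuition about the low-norm challenge the abstract mentions: the Ajtai commitment is linear, which provides the homomorphism folding needs, but a random linear combination of witnesses grows their norm. A sketch of the issue (notation chosen here for illustration, not taken from the paper):

```latex
% Ajtai commitment: binding under (Module-)SIS only for low-norm openings.
\[
  \mathrm{com}(\mathbf{w}) = \mathbf{A}\mathbf{w} \bmod q,
  \qquad \|\mathbf{w}\|_\infty \le \beta .
\]
% Linearity gives the additive homomorphism that folding relies on:
\[
  \mathbf{A}(\mathbf{w}_1 + r\,\mathbf{w}_2)
    = \mathbf{A}\mathbf{w}_1 + r\,\mathbf{A}\mathbf{w}_2 ,
\]
% but the folded witness can have norm as large as
\[
  \|\mathbf{w}_1 + r\,\mathbf{w}_2\|_\infty
    \le \|\mathbf{w}_1\|_\infty + |r|\,\|\mathbf{w}_2\|_\infty ,
\]
% so norms grow round after round unless the protocol enforces low-norm
% extraction -- the role played by LatticeFold's sumcheck-based technique.
```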
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
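For a sense of the storage-and-search flow involved, here is a minimal sketch assuming the pymilvus MilvusClient API; the collection name, 768-dimensional vectors, and record labels are placeholders rather than anything from Secludy's pipeline:

```python
# Minimal sketch of storing and searching embeddings in Milvus via pymilvus.
# In Secludy's pipeline these would be privacy-protected embeddings; here we
# fabricate random vectors purely to show the flow.
import random
from pymilvus import MilvusClient

client = MilvusClient("synthetic_demo.db")  # embedded Milvus Lite file
client.create_collection(collection_name="synthetic_records", dimension=768)

rows = [
    {"id": i, "vector": [random.random() for _ in range(768)], "label": f"rec-{i}"}
    for i in range(100)
]
client.insert(collection_name="synthetic_records", data=rows)

# Nearest-neighbor search with a (random) query embedding.
hits = client.search(
    collection_name="synthetic_records",
    data=[[random.random() for _ in range(768)]],
    limit=5,
    output_fields=["label"],
)
print(hits[0])
```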
4. Fog Computing: the collaboration of resources from ‘edge nodes’ for the purposes of computation, storage, analysis and management of devices and data
14. Architecture: [slide diagram] a Redis message broker connecting the Dashboard, WX API, and Solar Panel API with the Solar Panel Aggregator, WX Collector, and Solar Panel Collector; the diagram also labels the application services, the platform services, and the microservice boundary.
Based in Austin, TX. API consulting, including architecture, design, deployment, and training. API 101 for Capital One, book, cloud native architecture
Colleague’s previous startup 10 years ago managed a solar panel farm with no viable comms solution, so they built a mesh network and deployed Lua services for data collection/aggregation. We spoke about how we would architect it with today’s technologies… As we researched it, we found that a name had already been assigned…
(Solar Power Technologies Inc now Drake)
This kind of architecture is what some are calling ‘Fog Computing’, which is defined as the collaboration of resources from ‘edge nodes’ for the purposes of computation, storage, analysis and management of devices and data
This isn’t a new idea. I was involved with a startup in 2001 with similar ideas. We wanted to combine the compute and storage resources of desktops on the edge of the Internet with peer-to-peer networks and SOAP web services to share and collaborate on data and business processes – rather than centrally-located web services within IT.
What’s different? The push toward modern, Hypermedia-based Web APIs, Microservice Architectures, and the introduction of Container-based software deployment and distribution. Microservices emerging as architectural pattern. Modularization, technology freedom, easier deployment. DEF: Loosely coupled service-oriented architecture with bounded contexts
Containers are popular for development and starting to emerge in production environments
Deploys to bare metal are more difficult: no declarative or immutable infrastructure
Specifically, containers give us the flexibility of virtualization we have from cloud infrastructure: the ability to spin servers up and down
A microservice spans processes/containers. Each process gets its own container so I can upgrade the API without shutting off the collectors. Think about how you want to control and manage your microservice lifecycle, including its processes
cat /proc/cpuinfo to show the 4-core Pi
Show Docker Compose file and discuss each container, shared volumes and links
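A hypothetical sketch of the kind of Compose file shown at this step – the service names mirror the architecture slide, while the images and volume names are placeholders:

```yaml
# Illustrative docker-compose.yml for the demo topology: a Redis broker plus
# per-process containers, wired with the links and shared volumes the notes
# mention. Image names are placeholders, not the talk's actual images.
version: "2"
services:
  redis:
    image: redis:3
  dashboard:
    image: example/dashboard          # placeholder image
    ports:
      - "80:8080"
    links:
      - redis
  wx-collector:
    image: example/wx-collector       # placeholder image
    links:
      - redis
  solar-collector:
    image: example/solar-collector    # placeholder image
    volumes:
      - panel-data:/data              # shared volume between services
    links:
      - redis
volumes:
  panel-data:
```

Because each process is its own service, one container (say, the API) can be rebuilt and restarted without touching the collectors, which is the lifecycle point above.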
Launch each container
Open browser to dashboard and show data, discuss microservice pub/sub design
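A minimal sketch of that pub/sub design using redis-py; the channel name and payload shape are invented for illustration:

```python
# Collector publishes readings to a Redis channel; the dashboard/aggregator
# subscribes. Channel name and payload fields are hypothetical.
import json
import redis

r = redis.Redis(host="redis", port=6379)

def publish_reading(panel_id: str, watts: float) -> None:
    """Collector side: push each solar panel reading onto the channel."""
    r.publish("panel.readings", json.dumps({"panel": panel_id, "watts": watts}))

def consume_readings() -> None:
    """Dashboard side: react to readings as they arrive."""
    sub = r.pubsub()
    sub.subscribe("panel.readings")
    for message in sub.listen():
        if message["type"] == "message":
            reading = json.loads(message["data"])
            print(f"panel {reading['panel']}: {reading['watts']} W")
```

The broker decouples the services: stopping a collector (next step) simply means no new messages, while the dashboard keeps running.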
Optionally, stop the collectors and refresh the browser
Blue/green and immutable infrastructure on Docker – see Jerome’s talk yesterday on immutable infrastructure, a shared network to troubleshoot, and faster deployment by building only what is necessary