The document presents a new model called Beckstrom's Law for valuing networks from an economic perspective. It proposes that the value of a network to each user can be calculated by determining the net benefit value that the presence of the network adds to all transactions conducted by that user over time. The total value of the network is then the sum of the individual net benefits to all users. This user-centric model provides an alternative to previous models that focused on valuing the entire network from a centralized view.
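The per-user valuation rule can be sketched numerically. This is a minimal illustration under assumed inputs (the tuple format and discount rate are illustrative, not from the slides): each transaction contributes its benefit minus its cost, discounted back to the present, and the network's total value is the sum over all users.

```python
def user_network_value(transactions, rate=0.05):
    """Present value of one user's net benefits from a network.

    transactions: list of (benefit, cost, years_out) tuples, where
    benefit - cost is the value the network adds to that transaction.
    """
    return sum((b - c) / (1 + rate) ** t for b, c, t in transactions)

def network_value(users, rate=0.05):
    """Beckstrom-style total value: the sum of every user's net benefit."""
    return sum(user_network_value(txs, rate) for txs in users)

# Two users, each with one present-day transaction: value = 60 + 40 = 100.
total = network_value([[(100, 40, 0)], [(50, 10, 0)]])
```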
During September 2010, PR Meets Marketing evaluated the social media presence of 113 PR agencies. This presentation is a summary of those findings. For more information, go to www.prmeetsmarketing.com.
Cyberinfrastructure: Helping Push Research Boundaries - Cybera Inc.
Cyberinfrastructure can help push the boundaries of research by enabling novel applications and usage modes that exploit high-performance computing resources. Developing cyberinfrastructure requires an interplay between research requirements and infrastructure capabilities. Scientific applications such as free-energy computation can benefit from grid computing approaches, using novel algorithms that allow interactive simulation and the distribution of large parallel simulations. This makes it possible to tackle problems that were previously computationally intractable.
A Novel Target Marketing Approach based on Influence Maximization - Surendra Gadwal
This document proposes a new influence maximization approach called ANIM, built on existing work. It uses a Yelp dataset to build a social network and compute edge weights. ANIM is a greedy algorithm that iteratively selects the node whose addition yields the largest marginal gain in influence spread. Experiments show that ANIM achieves better influence spread and runtime than algorithms such as DegreeDiscount and NewGreedyIC. The goal is to identify influential customers for local businesses to target through online review sites.
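The greedy selection described above can be illustrated with a generic Monte-Carlo independent-cascade sketch. This is a simplification: ANIM's exact edge weighting from Yelp data is not reproduced, and the propagation probability is an assumed constant.

```python
import random

def simulate_ic(graph, seeds, prob, rng):
    """One independent-cascade run; returns the number of activated nodes.
    graph: dict mapping node -> list of out-neighbours."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def spread(graph, seeds, prob=0.1, runs=200, seed=0):
    """Expected influence spread, estimated by Monte-Carlo simulation."""
    rng = random.Random(seed)
    return sum(simulate_ic(graph, seeds, prob, rng) for _ in range(runs)) / runs

def greedy_im(graph, k, prob=0.1, runs=200):
    """Pick k seeds, each maximizing the marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        base = spread(graph, seeds, prob, runs)
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: spread(graph, seeds + [n], prob, runs) - base)
        seeds.append(best)
    return seeds
```

On a small star graph the greedy pass picks the hub, since only the hub can activate anyone else.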
Economics Of Networks - Rod Beckstrom, National Cybersecurity Center, Departm... - RodBeckstrom
These slides present a new universal economic model for valuing any network. The model is, in effect, a transaction-based, value-added model for network valuation.
Please note that SlideShare has distorted the economic green lines so they are no longer tangent to the optimal-solution lines. To be fixed.
Subthemes: Economics of Networks
Risk management for Security
Risk management for Cybersecurity (cyber security)
Metcalfe's Law
Reed's Law
Beckstrom's Law of Networks
Presented at: All Things Open 2019
Presented by: Samuel Taylor, Indeed
Find the transcript: https://www.samueltaylor.org/articles/open-source-machine-learning.html
Actors in a New "Highly Parallel" World - Fabio Correa
These are the slides for the paper I presented at the 2nd ICSE WUP. They give a short background on the actor model and describe my research project and its intended outcome.
These slides were used in an introductory lecture on Computational Finance, presented to a third-year class on Machine Learning and Artificial Intelligence. They present three examples of machine learning applied to computational/quantitative finance:
1) Model calibration (of a stochastic process) using a stochastic Hill Climbing algorithm.
2) Predicting credit default rates using a Neural Network.
3) Portfolio optimization using the Particle Swarm Optimization algorithm.
All of the Python code is available for download on GitHub; the link is at the end of the slide show.
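As an illustration of the first example, a stochastic hill climber can calibrate a single model parameter by repeatedly proposing random perturbations and keeping those that lower the error. This is a toy sketch with an assumed linear model, not the slides' stochastic-process calibration.

```python
import random

def stochastic_hill_climb(loss, x0, step=0.1, iters=2000, seed=0):
    """Minimize `loss` by proposing random perturbations and keeping improvements."""
    rng = random.Random(seed)
    x, best = x0, loss(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        val = loss(cand)
        if val < best:
            x, best = cand, val
    return x

# Toy "calibration": recover the slope of a noiseless linear trend.
data = [(t, 0.7 * t) for t in range(10)]
loss = lambda mu: sum((y - mu * t) ** 2 for t, y in data)
mu_hat = stochastic_hill_climb(loss, x0=0.0)
```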
https://quspeakerseries9.splashthat.com/
Lecture 1: Dr. Jorg Kientz
In this talk we outline Machine Learning algorithms and their potential applications, focusing on Deep Neural Networks. The aim is to outline different network architectures and then find a way of choosing the best-fitting architecture from a model-validation perspective. The approach is illustrated with examples.
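The validation-based selection the talk describes can be illustrated with a simpler model family, where polynomial degree stands in for network architecture. The degrees, data, and noise level here are assumptions for illustration only.

```python
import numpy as np

def validation_mse(degree, x_tr, y_tr, x_val, y_val):
    """Fit a polynomial of the given degree and score it on held-out data."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return float(np.mean((np.polyval(coeffs, x_val) - y_val) ** 2))

# "Architecture" here is the polynomial degree; the data are truly linear,
# so validation should favour simpler candidates over heavily overfitting ones.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 + 2.0 * x + 0.2 * rng.standard_normal(40)
x_tr, y_tr = x[::2], y[::2]          # even points for training
x_val, y_val = x[1::2], y[1::2]      # odd points held out for validation
best = min([1, 5, 15], key=lambda d: validation_mse(d, x_tr, y_tr, x_val, y_val))
```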
Mac281 Wikinomics And Collaborative Production - Rob Jewitt
Slides used in the Level 2 Cyberculture lecture on mass collaboration. Some formatting errors occurred in the upload. Supporting blog post: http://www.remedialthoughts.com/2009/03/era-of-mass-collaboration.html
The document describes an honors thesis presented by Rhea Stadick that implemented the Elliptic Curve Digital Signature Algorithm (ECDSA) in Java using the NIST prime elliptic curves over GF(p). The thesis created a Java applet that provides the functionality of ECDSA including key generation, signature generation, and signature verification. The applet was embedded in a website for public use. The implementation details and analysis of the ECDSA applet code are discussed.
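To make the three operations concrete, here is a toy ECDSA over a tiny textbook curve (y² = x³ + 2x + 2 mod 17 with base point (5, 1) of prime order 19), a standard teaching example rather than the NIST GF(p) curves the thesis uses; the nonce k is fixed for reproducibility, whereas real signatures need a secure random k.

```python
# Toy ECDSA. Illustration only: the curve is far too small for security.
P, A = 17, 2            # field prime and curve coefficient a (b = 2 unused here)
G, N = (5, 1), 19       # base point and its prime order

def inv(a, m):
    return pow(a, -1, m)                 # modular inverse (Python 3.8+)

def add(p1, p2):
    """Elliptic-curve point addition; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1, P) % P
    else:
        lam = (y2 - y1) * inv(x2 - x1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, p):
    """Scalar multiplication by double-and-add."""
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p, k = add(p, p), k >> 1
    return r

def sign(z, d, k):
    """Sign message hash z with private key d and nonce k."""
    r = mul(k, G)[0] % N
    s = inv(k, N) * (z + r * d) % N
    return r, s

def verify(z, sig, Q):
    """Verify signature against public key Q = d*G."""
    r, s = sig
    w = inv(s, N)
    X = add(mul(z * w % N, G), mul(r * w % N, Q))
    return X is not None and X[0] % N == r
```

With private key d = 7 (public key Q = 7G), message hash z = 10, and nonce k = 5, the signature works out to (9, 7) and verifies, while a tampered hash fails.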
Machine Learning, Deep Learning and Data Analysis Introduction - Te-Yen Liu
The document provides an introduction and overview of machine learning, deep learning, and data analysis. It discusses key concepts like supervised and unsupervised learning. It also summarizes the speaker's experience taking online courses and studying resources to learn machine learning techniques. Examples of commonly used machine learning algorithms and neural network architectures are briefly outlined.
This document discusses advanced concepts in association analysis, including handling continuous and categorical attributes, multi-level association rules, and sequential patterns. It describes various methods for applying association rule mining to datasets with non-binary attributes, such as discretization techniques and statistics-based approaches. It also discusses challenges like varying discretization intervals and generating rules at different levels of a concept hierarchy. Finally, it provides examples of sequential patterns that can be mined, such as sequences of customer transactions or nuclear accident events.
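For instance, the discretization step for continuous attributes can be sketched as mapping each value into an interval item that a standard rule miner can then treat as binary. The bin boundaries below are illustrative assumptions.

```python
def discretize(records, attribute, bins):
    """Map a continuous attribute into interval items for rule mining.
    bins: list of (low, high) pairs; emits items like 'age=[20,30)'."""
    items = []
    for rec in records:
        v = rec[attribute]
        for low, high in bins:
            if low <= v < high:
                items.append(f"{attribute}=[{low},{high})")
                break
    return items
```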
Academic Summary Example. A Summary Of A - Alison Carias
The story "Araby" by James Joyce takes place in Dublin, Ireland in the late 19th century, a time when Ireland was under British rule. The main character, a young boy, lives in a neighborhood that shows signs of poverty and neglect, reflecting the political and economic situation of Ireland at the time under British imperialism. His fascination with a young girl on his street and the bazaar called "Araby" represent his longing to escape from the bleakness of everyday life in Dublin through romance and adventure.
PR-297: Training data-efficient image transformers & distillation through att... - Jinwon Lee
Hello, this is the 297th review from PR-12, the TensorFlow Korea paper-reading group.
Only three papers remain before the end of PR-12 season 3.
Once season 3 wraps up, recruitment of new members for season 4 will begin right away. We would appreciate your interest and applications.
(The recruitment notice will be posted in the Facebook TensorFlow Korea group.)
The paper I reviewed today is Facebook's "Training data-efficient image transformers & distillation through attention."
Since Google's ViT paper, interest in computer vision algorithms that rely solely on attention, with no convolution at all, has been higher than ever.
The DeiT model proposed in this paper uses the same architecture as ViT, but where ViT performed poorly when trained on ImageNet data alone,
DeiT improves the training recipe and adds a new Knowledge Distillation method to outperform EfficientNet using only ImageNet data.
Is the CNN really starting to fade away? Will attention conquer computer vision as well?
Personally, I am convinced that attention-based CV papers will pour out for a while, and that surprising things can happen in this space.
CNNs have matured through a decade of research, whereas transformers have only just been applied to CV, so expectations are all the higher.
Because attention is the model form with the least inductive bias, I believe it can produce even more surprising results.
OpenAI's recently released DALL-E is a representative example. If you are curious about the Transformer's next transformation, please see the video below.
Video link: https://youtu.be/DjEvzeiWBTo
Paper link: https://arxiv.org/abs/2012.12877
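The hard-label distillation objective DeiT introduces can be sketched as follows: the class token is trained against the true label, while the separate distillation token is trained against the teacher's predicted (hard) label. This is a simplified NumPy illustration of the loss only; the transformer model and teacher network are omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    """DeiT-style hard distillation: the class token matches the true label,
    the distillation token matches the teacher's hard prediction."""
    teacher_labels = teacher_logits.argmax(axis=-1)
    return 0.5 * cross_entropy(cls_logits, labels) + \
           0.5 * cross_entropy(dist_logits, teacher_labels)
```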
Web Service QoS Prediction Approach in Mobile Internet Environments - jins0618
Many existing Web service QoS prediction approaches are accurate in Internet environments, but they cannot provide accurate predictions in Mobile Internet environments, where the QoS values of Web services are highly volatile. In this paper, we propose an accurate Web service QoS prediction approach that weakens the volatility of QoS data from Web services in Mobile Internet environments. The approach contains three processes: QoS preprocessing, user similarity computing, and QoS predicting. We have implemented the proposed approach and run experiments on real-world and synthetic datasets. The results show that our approach outperforms other approaches in Mobile Internet environments.
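The user-similarity and prediction steps can be sketched with a standard Pearson-based collaborative filter. This illustrates only the general technique; the paper's volatility-weakening preprocessing and exact formulas are not reproduced here.

```python
import math

def pearson(u, v):
    """Similarity over services observed by both users (dict service -> QoS)."""
    common = set(u) & set(v)
    if len(common) < 2:
        return 0.0
    mu = sum(u[s] for s in common) / len(common)
    mv = sum(v[s] for s in common) / len(common)
    num = sum((u[s] - mu) * (v[s] - mv) for s in common)
    den = math.sqrt(sum((u[s] - mu) ** 2 for s in common)) * \
          math.sqrt(sum((v[s] - mv) ** 2 for s in common))
    return num / den if den else 0.0

def predict(target, others, service):
    """Predict the target user's QoS for `service` as a similarity-weighted
    average over users who have observed that service."""
    pairs = [(pearson(target, o), o[service]) for o in others if service in o]
    num = sum(sim * val for sim, val in pairs if sim > 0)
    den = sum(sim for sim, _ in pairs if sim > 0)
    return num / den if den else None
```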
Dr. Bernhard Haslhofer presented on measurements and analytics techniques for cryptocurrency networks. He discussed network abstractions like transaction networks and address networks that can be used to cluster addresses and analyze cryptocurrency flows. As an example application, he described a ransomware study using the GraphSense analytics platform that identified the Locky ransomware family as generating the highest revenues, estimated at over $7 million USD, by tracing cryptocurrency transactions from seed ransomware addresses.
Test Expo 2009 Site Confidence & Seriti Consulting Load Test Case Study - Stephen Thair
The document provides an overview of load testing a website, including tips on designing and conducting the test. It discusses determining test objectives and critical user journeys, setting targets for transactions and concurrent users, using analytics to inform the test design, and analyzing results to identify performance bottlenecks and take corrective action. Contact details are provided for vendors that can assist with load testing tools and services.
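Turning analytics figures into concurrency and throughput targets often comes down to Little's law (L = λW), a standard queueing result rather than anything specific to this deck; the input figures below are made up for illustration.

```python
def load_test_targets(peak_visits_per_hour, avg_session_secs, pages_per_visit):
    """Derive load-test targets from analytics figures via Little's law (L = lambda * W)."""
    arrivals_per_sec = peak_visits_per_hour / 3600
    return {
        "concurrent_users": arrivals_per_sec * avg_session_secs,
        "page_requests_per_sec": arrivals_per_sec * pages_per_visit,
    }

# 3600 visits/hour, 5-minute sessions, 10 pages per visit.
targets = load_test_targets(3600, 300, 10)
```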
IRJET- Predicting Bitcoin Prices using Convolutional Neural Network Algor... - IRJET Journal
This document discusses predicting bitcoin prices using a convolutional neural network algorithm. It begins with an abstract that describes the goal of understanding daily trends in the bitcoin market and identifying key factors that influence bitcoin value. It then discusses related work on cryptocurrency price prediction. The proposed system is described as using machine learning algorithms like CNNs to collect data on bitcoin prices and other factors, preprocess the data, train a model, and test it to predict bitcoin price changes. Experimental results are discussed briefly. In conclusion, the system aims to accurately predict price movements, and future work could develop a web application and improve the algorithms used.
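At the core of such a CNN is a 1-D convolution sliding over the price series. A minimal NumPy sketch of that single operation follows (illustrative only; the paper's architecture, features, and training procedure are not reproduced, and the prices are invented):

```python
import numpy as np

def conv1d(series, kernel):
    """Valid-mode 1-D convolution (strictly, cross-correlation, as in CNNs)."""
    n, k = len(series), len(kernel)
    return np.array([series[i:i + k] @ kernel for i in range(n - k + 1)])

prices = np.array([100.0, 101.0, 103.0, 102.0, 105.0, 107.0])
momentum = conv1d(prices, np.array([-1.0, 0.0, 1.0]))  # 2-day price change
```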
This document summarizes a deep learning tutorial presentation given by Sri Krishnamurthy. The presentation introduced machine learning and deep learning concepts. It provided demonstrations of deep learning techniques like convolutional neural networks using Python libraries like Theano and Keras. Applications discussed included image recognition, speech recognition, question answering and cybersecurity. Recurrent neural networks and autoencoders were also introduced. The presenter emphasized the role of data, hardware and new approaches in advancing deep learning research.
NASA celebrated its 50th anniversary on October 1, 2008. Its mission is to pioneer space exploration, scientific discovery, and aeronautics research. NASA uses knowledge management strategies like NASAsphere, an internal social network, to facilitate communication across its 10 centers. NASAsphere helped capture expertise, accelerate information sharing, and create a collective intelligence within the agency. However, establishing expectations and moderating content were important to focus the network on work issues.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary spending, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can put into action immediately
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe - Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Similar to Ncsc Value Of Networks Rod Beckstrom 090312 Final
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6 June 2024. This event will adopt a hybrid format, allowing participants to join either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Dandelion Hashtable: beyond billion requests per second on a commodity server, by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite optimization efforts that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an..., by Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors, by DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service, including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
-Creating a compelling user experience for any software, without the limitations of APIs.
-Accelerating the app creation process, saving time and effort.
-Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf, by Chart Kalyan
A Mix Chart displays historical number data in graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
"Choosing proper type of scaling", Olena Syrota (Fwdays)
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application..., by Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
The Microsoft 365 Migration Tutorial For Beginner.pptx, by operationspcvita
This presentation will help you understand the power of Microsoft 365. We cover every productivity app included in Office 365, discuss common Office 365 migration scenarios, and explain how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Introduction of Cybersecurity with OSS at Code Europe 2024, by Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Building Production Ready Search Pipelines with Spark and Milvus, by Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data to the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
4.
Vi,j = ∑k Bi,k / (1 + rk)^tk − ∑l Ci,l / (1 + rl)^tl
Where
Vi,j = net present value of all transactions k = 1 through n to individual i
with respect to network j
i = one user of the network
j = identifies one network or network system
Bi,k = the benefit value of transaction k to individual i
Ci,l = the cost of transaction l to individual i
rk and rl = the discount rate of interest to the time of transaction k or l
tk and tl = the elapsed time in years to transaction k or l
To simplify subsequent derivations of the equation, the net present value benefit
and cost terms will be simplified without the discount function and be italicized as
simply Bi,k and Ci,l. Other terms italicized will also express net present values of those
terms, and a simple sigma will represent the relevant series of transactions over any
defined time period.
Thus the equation is simplified to:
Vi,j = ∑ Bi,k − ∑ Ci,l
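The per-user valuation above can be sketched in code. The following is a minimal illustration only; the Transaction fields, helper names, and sample figures are assumptions for demonstration, not part of the model:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    value: float  # benefit B or cost C of the transaction, in dollars
    rate: float   # discount rate r applicable to this transaction
    years: float  # elapsed time t, in years, to the transaction

def present_value(tx: Transaction) -> float:
    """Discount one transaction back to a net present value."""
    return tx.value / (1.0 + tx.rate) ** tx.years

def user_network_value(benefits: list, costs: list) -> float:
    """V_i,j: discounted benefits minus discounted costs for one user i on network j."""
    return (sum(present_value(b) for b in benefits)
            - sum(present_value(c) for c in costs))
```

With all discount rates set to zero, this reduces to the simplified form ∑ Bi,k − ∑ Ci,l used in the rest of the derivation.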
Valuing an Entire Network
The above equation represents the value of the Internet to one user. The value of
the entire Internet or any network Nj is the summation of the value of that network
to all individuals or entities, i through n, engaged in transactions on that network.
Thus a summation term is now added before Vi,j:
∑(i = 1 to n) Vi,j = ∑ Bi,k − ∑ Ci,l
The Sum Value of All Networks
Similarly, to value all networks to all users in the world simply requires a
summation of all networks j = 1 through n.
∑(j = 1 to n) ∑(i = 1 to n) Vi,j = ∑ Bi,k − ∑ Ci,l
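The nested summations can be expressed directly as sums over per-user net values. A small sketch, in which the data layout and the example figures are assumptions for illustration:

```python
def network_value(user_values: dict) -> float:
    """Value of one network j: the sum of V_i,j over all users i = 1..n."""
    return sum(user_values.values())

def total_value_all_networks(networks: dict) -> float:
    """Sum over all networks j = 1..n of each network's per-user sum of V_i,j."""
    return sum(network_value(users) for users in networks.values())

# Hypothetical per-user net values V_i,j (benefits minus costs), in dollars
networks = {
    "N1": {"alice": 10.0, "bob": 5.0},
    "N2": {"alice": -2.0},
}
```

Note that a user's net value can be negative for a given network, in which case that user subtracts from the network's total value.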
7. SCi,j = SIi,o + Li,p
Where SCi,j = net present value of all security costs to individual i with
respect to network j, SIi,o = the net present value of security investments, and
Li,p = the net present value of security losses
In other words, security is a cost and security losses are also a cost. This now
provides a clear economic function to optimize (in this case, minimize).
Core Security Risk Optimization Function
Minimize SCi,j = SIi,o + Li,p
Security costs (investments plus losses) are optimal when they have been
minimized. This leads to an important insight: one dollar of security investment
is only a benefit when it reduces expected losses by more than a dollar. In other
words, security dollars should be invested where they produce the greatest drop in
expected losses, and investment should continue only up to the point where the
last dollar invested reduces expected losses by a dollar, and no further.
Economically, this introduces a new important trade‐off: security dollars versus loss
dollars. The question for any organization then becomes, how does the expected
loss function drop with increases in security dollars?
While calculating the expected losses is extremely difficult, it is nonetheless vital, for
without it, investments in security lack an economic basis. Recent discussions with
private sector and governmental Chief Information Security Officers have confirmed
the need for such an economic model and analysis.
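As a purely illustrative sketch of this trade-off, the marginal-dollar rule can be applied numerically. The loss curve, its parameters, and the function names below are assumptions for demonstration, not an empirical model:

```python
import math

def optimal_investment(expected_loss, step=1.0, max_si=1_000_000.0):
    """Increase security investment SI one step at a time while each
    extra dollar still reduces expected losses by more than a dollar."""
    si = 0.0
    while si < max_si:
        saved = expected_loss(si) - expected_loss(si + step)
        if saved <= step:  # the next dollar no longer pays for itself
            return si
        si += step
    return si

# Hypothetical loss curve: $1,000 expected loss, falling 1% per dollar of SI
def example_loss(si):
    return 1000.0 * math.exp(-0.01 * si)
```

Under these assumed numbers the search stops near SI ≈ $230; past that point each additional dollar of investment saves less than a dollar of expected loss.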
It can also be argued that the short-term loss function may be greater than
the steady-state value added of a network. In other words, while the presence of a
network may normally contribute only $10 billion in economic value each day to a
country, the complete shutdown of that network may have a higher short-term cost.
This latter value represents “disruption value losses,” and several economic pieces
have been drafted on this topic. For example, at the individual company level, losses
have been estimated at $2 million per incident.3 If numerous companies are involved
in a systemic network disruption, the losses would be substantial. In short, the
disruption cost is the economic loss from a network being suddenly shut off, as
opposed to the value it adds on a normal periodic basis.
While in the short term a disruption value could be greater than the conventional
value added of the network, in the long term the two should converge, since over
time the lost value added would prompt an equal and offsetting investment in
replacing the network's functions with a new network.
8. Illustration 1. Plotting Security Investments Versus Losses
[Figure: expected losses ($) on the vertical axis plotted against security investment (SI, $) on the horizontal axis]
In Illustration 1, as the security investment increases along the horizontal axis, the
losses, represented by the green line, drop. Initially, the drop is very steep as the
first dollars invested in security yield a significant decrease in losses. However, as
the level of security dollars invested increases, at some point, there are diminishing
returns. In terms of network security, this roughly conforms to the observed reality
that a number of key security steps, such as installing all software patches and the
like, tend to reduce 80% of the problems. In other words, the first easy investments
have a relatively high payoff. Closing the gap on the remaining 20%, however,
becomes increasingly expensive for each additional 1% reduction in risk.4
9. Illustration 2. Optimal Security Investment
[Figure: expected losses ($) on the vertical axis versus security investment (SI, $) on the horizontal axis, with the optimal point marked at S']
The optimal security investment occurs on the loss function line where it is tangent
to a 45-degree line, that is, where one dollar of security investment equals one dollar
decrease in expected losses. This point is represented by S’ in terms of security
dollars.
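The tangency condition can be made concrete with a worked example. Assuming an exponential loss curve (an illustrative choice, not part of the model), the optimum S' follows directly:

```latex
L(SI) = L_0 e^{-a \cdot SI}
\qquad
\frac{dL}{dSI} = -a L_0 e^{-a \cdot SI}
% Tangency with the 45-degree line: the marginal dollar of SI
% saves exactly one dollar of expected loss
-\frac{dL}{dSI} = 1
\quad\Longrightarrow\quad
S' = \frac{1}{a} \ln(a L_0)
```

For instance, with an assumed L_0 = $1,000 and a = 0.01 per dollar, S' = 100 ln 10 ≈ $230: below S' each security dollar saves more than a dollar of expected loss, and above S' it saves less.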
Summary
It is important and useful to have a simple and robust model for estimating the
ongoing economic value of networks. Such a model makes it possible to calculate or
at least estimate the value of the presence of networks. The framework presented
in this article provides a simple model that is consistent with other primary
economic models such as transaction accounting, cost accounting and net present
value analysis. The model answers a very important question: what is the value of a
network? At the same time it opens up new questions to be answered, such as how we
can best value the benefit of transactions.