The document discusses FaunaDB's deterministic transaction protocol. It begins with an example of two customers trying to purchase the same item and the steps the protocol takes to handle the transaction in a consistent way across distributed replicas. Coordinators first read and calculate transaction effects, then submit transactions to a log. Replicas validate that reads match before applying transactions to preserve serializability. If reads do not match, the transaction is aborted and retried.
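The replica-side step described above can be sketched in a few lines of Python. This is a minimal illustration of the read-validation idea, not FaunaDB's actual implementation; the `Txn` type and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    reads: dict   # key -> value the coordinator observed when planning
    writes: dict  # key -> precomputed new value to apply

def apply_transaction(store: dict, txn: Txn) -> str:
    # A replica first validates that every value the coordinator read
    # is still current; any mismatch means the transaction is stale.
    for key, observed in txn.reads.items():
        if store.get(key) != observed:
            return "abort"  # reads no longer match: abort and retry
    # Reads match, so the precomputed effects can be applied safely.
    for key, value in txn.writes.items():
        store[key] = value
    return "commit"
```

Because every replica receives transactions in the same log order and runs the same check, each reaches the same commit-or-abort decision independently, which is what preserves serializability across replicas.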
Opening presentation given at AdhearsionConf 2013. It lays out a vision for the future of the Adhearsion project as well as the future of real-time communications applications.
IT Monitoring in the Era of Containers | Luca Deri, Founder & Project Lead, ntop (InfluxData)
Network traffic monitoring tools are traditionally based on the packet paradigm, where tools need to analyse each incoming and outgoing packet. As systems move towards a microservice-oriented architecture based on containers, the packet paradigm is no longer enough to provide IT visibility, because services interact inside a system rather than over a network where it is possible to install network sensors. This talk explains how open source tools designed by ntop on top of InfluxDB allow packet monitoring to be complemented with container monitoring, implementing a lightweight visibility solution for modern IT infrastructures.
Where’s Wally? How to Privately Discover your Friends on the Internet (Panagiotis Papadopoulos)
Internet friends who would like to connect with each other (e.g., VoIP, chat) use point-to-point communication applications such as Skype or WhatsApp. Apart from providing the necessary communication channel, these applications also facilitate contact discovery, where users upload their address-book and learn the network address of their friends. Although handy, this discovery process comes with a significant privacy cost: users are forced to reveal to the service provider every person they are socially connected with, even if they do not ever communicate with them through the app. In this paper, we show that it is possible to implement a scalable User Discovery service, without requiring any centralized entity that users have to blindly trust. Specifically, we distribute the maintenance of the users’ contact information, and allow their friends to query for it, just as they normally query the network for machine services. We implement our approach in PROUD: a distributed privacy-preserving User Discovery service, which capitalizes on DNS. The prevalence of DNS makes PROUD immediately applicable, able to scale to millions of users. Preliminary evaluation shows that PROUD provides competitive performance for all practical purposes, imposing an overhead of less than 0.3 sec per operation.
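As a toy illustration of the DNS-based idea (this is not PROUD's actual naming scheme, and the zone name is invented), a user's contact record could live under a DNS name derived from their identifier, so any friend who knows the identifier can compute the same name and resolve it like any other record:

```python
import hashlib

def discovery_name(user_id: str, zone: str = "users.example.org") -> str:
    # Derive a stable, fixed-length DNS label from the user's identifier.
    # 32 hex characters is well under the 63-character DNS label limit.
    label = hashlib.sha256(user_id.encode()).hexdigest()[:32]
    return f"{label}.{zone}"
```

The hashing keeps raw identifiers out of query names, while DNS's existing caching and delegation do the heavy lifting of distribution.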
FreeBSD: The Next 10 Years (MeetBSD 2014) (iXsystems)
Watch the video here: http://bit.ly/11wK25T.
These are the slides for Jordan Hubbard's presentation, "FreeBSD: The Next 10 Years", given at MeetBSD California 2014 in San Jose.
Visit us at www.iXsystems.com or www.FreeNAS.org to learn more.
Breaking Smart Speakers: We are Listening to You. (Priyanka Aash)
"In the past two years, smart speakers have become the most popular IoT device; Amazon, Google, and Apple have each introduced their own smart speaker products. Most of these smart speakers offer natural language recognition, chat, music playback, IoT device control, shopping, and so on. Manufacturers use artificial intelligence technology to give smart speakers near-human capabilities in chat conversations. However, as smart speakers enter more and more homes and their functionality grows more powerful, their security has been questioned by many people. People worry that smart speakers will be hacked to leak their privacy, and our research proves that this concern is well founded.
In this talk, we will present how to use multiple vulnerabilities to achieve remote attacks on some of the most popular smart speakers. Our final attack effects include silent listening, controlling what the speaker says, and other demonstrations. We will also talk about how to extract firmware from BGA-packaged flash chips such as eMMC, eMCP, and NAND flash. In addition, we cover how to turn on debug interfaces and get root privileges by modifying firmware content and re-soldering flash chips, which can be of great help for subsequent vulnerability analysis and debugging. Finally, we will play several demo videos to demonstrate how we can remotely obtain root privileges on some smart speakers and use them for eavesdropping and playing voice."
Interactive real time dashboards on data streams using Kafka, Druid, and Supe... (DataWorks Summit)
When interacting with analytics dashboards, two key requirements for a smooth user experience are quick response time and data freshness. To meet the requirements of building fast interactive BI dashboards over streaming data, organizations often struggle to select a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, but they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of the complete stack, walk through its key features, and cover performance characteristics from real-world use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
Kostas Tzoumas - Stream Processing with Apache Flink® (Ververica)
In this talk the basics of Apache Flink are covered: why the project exists, where it came from, what gap it fills, how it differs from other stream processing projects, what it is being used for, and where it is headed. In short, streaming data is the new trend, and for very good reasons. Most data is produced continuously, and it makes sense to process and analyse it continuously. Whether the need is for more real-time products, adopting microservices, or building continuous applications, stream processing technology promises to simplify the data infrastructure stack and reduce the latency to decisions.
This is a hands-on workshop that will teach you how to build a Web application that incorporates real-time communication between the client application running on the browser and the back-end server.
We will start with an overview of technologies and tools available for building real-time Web apps, what’s involved, the basics, and the gotchas. Next, we will build, in real time, a real-time chat application using the Python (Tornado) + socket.io + Backbone stack. Why not Node.js, you might ask. Simple: it’s too easy, too popular, and not especially stable or secure. But you’re welcome to use Node.js as the backend in your own apps!
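The core of such a chat app, stripped of the transport layer, is just a server-side broadcast loop. Here is a minimal in-memory sketch (the workshop itself wires this up to socket.io connections; the class and method names here are illustrative):

```python
class ChatRoom:
    """Toy model of the server-side fan-out a real-time chat performs."""

    def __init__(self):
        self.clients = []  # one send-callback per connected client

    def join(self, send):
        # In a real app, `send` would write to a WebSocket/socket.io channel.
        self.clients.append(send)

    def broadcast(self, sender: str, text: str):
        # Every connected client, including the sender, sees the message.
        for send in self.clients:
            send(f"{sender}: {text}")
```

Everything else in the stack (Tornado handlers, socket.io framing, the Backbone client) exists to deliver those callbacks across the network.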
What is Blockchain and why should we care? (Paul Johnston)
A talk that tries to explain blockchain in 10 minutes (!) along with some of the use cases it is being put to. Mainly for a non-technical, faith-based audience. tl;dr: just use a database and don't get involved in crypto.
Head in the_clouds_feet_firmly_on_the_ground mar.ppt (Bill Lublin)
Presentation on cloud computing for small businesses and professionals, created entirely using cloud-based resources, for the Michigan Association of REALTORS Broker Summit, April 2011.
US Ignite applications are highlighting the advantages of an Internet that is absolutely reliable when you need it to be, never makes you wait, connects seamlessly to your smart things, and can easily deliver big data experiences to and from your home, small business, or wireless device. These requirements suggest a locavore architecture for US Ignite communities. Glenn Ricart, US Ignite
Interactive real-time dashboards on data streams using Kafka, Druid, and Supe... (DataWorks Summit)
When interacting with analytics dashboards, two key requirements for a smooth user experience are quick response time and data freshness. To meet the requirements of building fast interactive BI dashboards over streaming data, organizations often struggle to select a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, but they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real-time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of the complete stack, walk through its key features, and cover performance characteristics from real-world use cases. Nishant Bangarwa, Software Engineer, Hortonworks
Aljoscha Krettek offers a very short introduction to stream processing before diving into writing code and demonstrating the features in Apache Flink that make truly robust stream processing possible, with a focus on correctness and robustness in stream processing.
All of this will be done in the context of a real-time analytics application that we’ll be modifying on the fly based on the topics we’re working through, as Aljoscha exercises Flink’s unique features, demonstrates fault recovery, clearly explains why event time is such an important concept in robust, stateful stream processing, and covers the features you need in a stream processor to do robust, stateful stream processing in production.
We’ll also use a real-time analytics dashboard to visualize the results we’re computing in real time, allowing us to easily see the effects of the code we’re developing as we go along.
Topics include:
* Apache Flink
* Stateful stream processing
* Event time versus processing time
* Fault tolerance
* State management in the face of faults
* Savepoints
* Data reprocessing
Caterpillar’s move to the cloud: cutting-edge tools for a cutting-edge business (DataWorks Summit)
Telematics information has been flowing from our assets to Caterpillar via email, satellite, cell tower, and direct connect for over 20 years. Our systems have morphed from a single Unix box to the Azure cloud, from Oracle to Azure Table Storage to SQL Server to HBase/Phoenix, and from 10 to 500 messages per second. This presentation will track where we came from and, more specifically, our current system of Azure Event Hubs, Storm topologies, a Phoenix backend, and streaming with Spark.
In this session, learn all about the Caterpillar journey, from where we came from to where we are today, including lessons learned along the way. This presentation is aimed at those wanting to understand the interrelationship between change, technology, and platform decisions. In addition, we will show how modern tools can dramatically reduce the complexity and time associated with IoT solution deployment. The audience should leave the session feeling that anyone can do IoT. Current tools are easier, faster, and better than ever. Mark Juchems, Digital Technical Specialist, Caterpillar, and Justin Rice.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
More Related Content
Similar to QCONSF - FaunaDB Deterministic Transactions
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined tools from two critical Linux packages: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean, optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
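The flavor of seed slimming can be sketched with a greedy, afl-tmin-style pass. To be clear, this is an illustration of the general idea, not DIAR's actual algorithm: drop each byte in turn and keep the removal whenever the program's coverage signature is unchanged.

```python
def shrink_seed(seed: bytes, coverage) -> bytes:
    # `coverage` maps an input to some coverage signature for the target;
    # bytes whose removal leaves the signature unchanged are uninteresting.
    base = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if coverage(candidate) == base:
            seed = candidate      # byte removed; retry the same position
        else:
            i += 1                # byte matters; keep it and move on
    return seed
```

In practice `coverage` would be a run of the instrumented target; here any pure function of the input works the same way.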
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
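As a sketch of what a deployment-BOM record might capture (the field names here are invented for illustration, not a standard DBOM schema): each deployed artifact is recorded with its identity plus a content digest, so what is actually running can later be verified against the record.

```python
import hashlib

def dbom_entry(artifact: bytes, name: str, version: str, env: str) -> dict:
    # Record what was deployed where; the digest lets an auditor later
    # confirm the running artifact matches what the pipeline shipped.
    return {
        "name": name,
        "version": version,
        "environment": env,
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }
```

A deployment firewall could then refuse any artifact whose digest is absent from the recorded entries for that environment.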
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and former Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
2. About the Speakers
Chris Anderson
Director of Developer Evangelism at Fauna
- Cofounder of Couchbase
- Architect of Couchbase Mobile
- Bachelor’s degree in philosophy from Reed College
- O'Reilly technical book author
- Leads Fauna's developer community
7. Unified vs Partitioned Consensus
Thanks to Daniel Abadi for contributing research and material. Learn more in this blog post:
https://fauna.com/blog/faunadb-transaction-protocol
Fauna was founded by Matt and Evan based on their experiences at Twitter. We’ve raised almost $30M to date from premier VCs and are based in SF. Every application needs a database. Apps have evolved; databases haven’t kept pace. We’re building a new database, and we’re excited to tell you about it. It is interesting to note that Google built Spanner and still invested in Fauna.
Multi-model interface: read and write documents, but query in multiple models such as relational, graph, etc.
Distributed ACID transactions: a patent-pending algorithm ensures ACID guarantees in every cluster configuration.
High security: row-level identity, authentication, and access control protect against application error, and transport encryption protects against adversaries [end-to-end encryption coming soon].
Horizontal scalability: dynamically scale from a single machine to multiple datacenters on commodity and cloud infrastructure with no downtime.
High availability: redundant, self-healing clustering reacts to machine, datacenter, and network failures in milliseconds with no loss of liveness or durability.
Temporality: manage data retention and run queries on historical data at any point in time or as change feeds [realtime streaming coming soon].
Multi-tenancy: operate a shared-services environment safely with dynamic quality-of-service management and charge consumption back to lines of business (LoBs).
Operational simplicity: rest easy with self-driving cluster management that does what you mean every time with a few strokes of the command line.
Any questions thus far? Otherwise, let’s jump into some product details.
The coordinator executes the transaction code. In most cases, it will not have all of the relevant data locally, and thus will have to read data from nearby servers within the same replica that have the required partitions of data that are involved in the transaction request. It chooses a recent snapshot time (this choice can be arbitrary), and makes requests to the nearby servers to read data as of that snapshot. In our example, let’s assume that the coordinator for each of our two competing transactions chooses to read as of T9 (the most recent transaction in the global transaction log):
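The optimistic read phase described above can be sketched as follows. This is an illustrative model, not Fauna's implementation; the names (`Coordinator`, `prepare`, the versioned store layout) are invented for the example. The key idea is that the transaction body runs with no locks held, while the coordinator records which (key, version) pairs the outcome depends on.

```python
# Hypothetical sketch of the coordinator's optimistic phase: read at a
# chosen snapshot time, compute the transaction's effects locally, and
# record which (key, version) pairs the outcome depends on.

class Coordinator:
    def __init__(self, snapshot_store):
        # snapshot_store maps key -> list of (txn_time, value), oldest first
        self.store = snapshot_store

    def read_at(self, key, snapshot_time):
        """Return the latest (version, value) at or before snapshot_time."""
        versions = [(t, v) for t, v in self.store.get(key, []) if t <= snapshot_time]
        return versions[-1] if versions else (None, None)

    def prepare(self, txn_fn, snapshot_time):
        reads = {}   # key -> version observed (kept for later validation)
        writes = {}  # key -> new value (the transaction's effects)

        def read(key):
            version, value = self.read_at(key, snapshot_time)
            reads[key] = version
            return value

        def write(key, value):
            writes[key] = value

        txn_fn(read, write)   # run the transaction body with no locks held
        return reads, writes  # submitted to the transaction log for ordering

# Example: buy the last unit of an item, reading as of snapshot T9.
store = {"item:42": [(5, {"stock": 1})]}

def buy(read, write):
    item = read("item:42")
    if item["stock"] > 0:
        write("item:42", {"stock": item["stock"] - 1})

reads, writes = Coordinator(store).prepare(buy, snapshot_time=9)
# reads == {"item:42": 5}; writes == {"item:42": {"stock": 0}}
```

Both competing coordinators in the example would produce the same reads and writes here; the conflict is only detected later, when the log orders the two transactions.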
Nvidia is one of our marquee customers. Their most recent customer-facing applications are based on Fauna, where it is used for user identity management. They started out with a single site and quickly scaled worldwide to support their explosive user growth. The numbers speak for themselves. The identity service is now in use by multiple consumer-facing Nvidia apps. It just works.
Most importantly, they run the entire cluster with just one person dedicated half-time to this role. No outages yet.
About the User
Nextdoor is a large social network provider, focusing on neighborhood conversations.
Project Overview
Nextdoor’s neighborhood organization means municipal and safety services see it as a useful channel for connecting with residents. One of the most heavily used features in the Nextdoor app is the ability for these local government services to send broadcast alerts to users in particular areas. However, the queries to compile lists of users based on group membership are complex and were creating performance and operational headaches with their existing Postgres deployment. Similar behavior was also seen in other portions of the Nextdoor app. Therefore, Nextdoor embarked on an effort to create a new “groups subsystem” to offload such queries and minimize the impact on the application’s performance.
Requirements and Challenges
Nextdoor had multiple business and technical requirements driving the shape of the groups subsystem.
Firstly, Nextdoor is seeing a boom in the usage of its mobile app. They anticipate traffic to the groups subsystem to increase steadily. The subsystem itself is expected to find use in multiple functions of the Nextdoor app. Therefore, the subsystem must be supported by a database that can scale up with the application traffic, without custom hardware or specialized solutions.
Secondly, Nextdoor’s mobile app serves users distributed globally. As such, they wanted data to be available worldwide, and highly available in multiple regions.
Thirdly, groups subsystem queries are IO-intensive, including index lookups, nested joins, and set intersections. The group membership model is graph-like, with users as members of neighborhoods but also of other groups. Neighborhoods are part of larger groups, which can also be nested into regions. Nextdoor required a database with support for complex queries.
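A toy model of this graph-like membership structure makes the query shape concrete. All the data and names here are invented for illustration; the point is that a broadcast audience is a recursive union over nested groups, intersected with another set.

```python
# Illustrative sketch of the graph-like membership model: users belong
# to neighborhoods, neighborhoods nest into larger groups, and a
# broadcast targets everyone under a group, intersected with another
# audience. All data and identifiers here are invented.

children = {
    "region:bay-area": ["hood:mission", "hood:noe-valley"],
    "hood:mission": [],
    "hood:noe-valley": [],
}
members = {
    "hood:mission": {"alice", "bob"},
    "hood:noe-valley": {"carol"},
}

def users_under(group):
    """Recursively collect users in a group and all nested subgroups."""
    found = set(members.get(group, set()))
    for sub in children.get(group, []):
        found |= users_under(sub)
    return found

# Broadcast audience: everyone in the region who also opted into alerts.
opted_in = {"alice", "carol", "dave"}
audience = users_under("region:bay-area") & opted_in
# audience == {"alice", "carol"}
```

In a relational store these recursive unions and set intersections become the expensive nested joins the paragraph describes.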
Fourthly, Nextdoor is built in the cloud. They wanted to adopt a database backend that would operate in the cloud as well, thereby minimizing the administrative and operational overhead of running their application.
Lastly, various teams had been migrating workloads from RDBMS to NoSQL for almost a decade to take advantage of the scalability of NoSQL systems. What remained were the workloads that require relational features like joins and ACID transactions, leaving Nextdoor with a hodgepodge of Postgres and NoSQL databases. Nextdoor saw the groups subsystem effort as an opportunity to consolidate workloads and lower their total cost of ownership.
Why Fauna
FaunaDB gave Nextdoor the general-purpose platform they needed for running mission-critical workloads in cloud-native environments.
Unlike Postgres, FaunaDB was designed from the ground up as a cloud-native and horizontally scalable database. It delivers the same set of data management capabilities, no matter the data distribution topology. Most importantly, it does so without sacrificing relational features desired by Nextdoor.
Robust multi-region replication with strong consistency means that data committed to FaunaDB is available across all regions, so data is correct and complete even in the face of disasters. This gave Nextdoor the horizontal scalability they require.
FaunaDB features a multi-model interface that includes relational primitives such as ACID transactions, consistent indexes, and joins, as well as document- and graph-style querying, all stored with configurable temporal snapshot retention. Nextdoor’s groups subsystem queries include nested joins and set intersections, a perfect fit for FaunaDB’s expressive query language.
In FaunaDB, Nextdoor found a data platform designed to grow with the business, not just a scalable transaction engine with powerful queries. The full suite of platform features, including multi-tenancy and object-level security, means Nextdoor can expand their FaunaDB installation to support more applications and use cases.
Results
By choosing FaunaDB for the group subsystem, Nextdoor was able to isolate the workload so that group queries do not contend with other application traffic. FaunaDB’s query language allows them to express complex queries and compose queries programmatically. This flexibility means Nextdoor can expand the use cases for targeted content, while also exploring more ways to use FaunaDB. Because a single FaunaDB cluster is designed to support multi-tenancy, they can easily add new workloads while continuing to grow the groups subsystem.
I just added this use case because it relates to what we talked about earlier: the evolution of business application platforms and the latest wave, voice apps.
VoiceConnect is a new company creating applications for Amazon’s Alexa platform.
They wanted to use AWS, and in particular AWS Lambda, to create a serverless architecture.
Any questions thus far? Otherwise, let’s jump into some product details.
Fauna’s transactions are key to its correctness and productivity benefits. We have built a cutting-edge system designed to deliver fully ACID-compliant transactions with as few tradeoffs as possible. It completely eliminates restrictions other systems must place on transaction functionality, such as limiting transactions to a single record or a single shard.
The way this works is that a Fauna cluster internally manages a distributed write-ahead log to which all transactions are written. The log’s throughput scales with the size of the cluster, which eliminates the capped transaction throughput that other systems suffer from.
The log has the important job of ordering all write transactions with respect to each other. This provides the ACID property of strict serializability. Once transactions are written to the log, each node can then independently play through its set of transactions based on the data it owns.
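A toy model of this replay step, under the assumption (per Fauna's published protocol description) that each log entry carries the coordinator's optimistic reads along with its writes, and that a replica aborts any transaction whose reads are no longer current. All names here are illustrative, not Fauna's API.

```python
# Hypothetical sketch of a replica replaying the ordered log: each entry
# carries the reads (key -> version observed) and writes computed by the
# coordinator. The replica applies a transaction only if every read is
# still current; otherwise the transaction aborts (and can be retried).

def replay(log, state):
    """state maps key -> (version, value). Returns per-txn outcomes."""
    outcomes = []
    for txn_id, reads, writes in log:
        valid = all(state.get(k, (None, None))[0] == v for k, v in reads.items())
        if valid:
            for k, val in writes.items():
                state[k] = (txn_id, val)  # new version = position in the log
            outcomes.append((txn_id, "commit"))
        else:
            outcomes.append((txn_id, "abort"))
    return outcomes

# Two competing purchases, both of which read item version 5 at snapshot
# T9; the log orders T10 before T11, so only the first can commit.
state = {"item:42": (5, {"stock": 1})}
log = [
    (10, {"item:42": 5}, {"item:42": {"stock": 0}}),
    (11, {"item:42": 5}, {"item:42": {"stock": 0}}),
]
outcomes = replay(log, state)
# outcomes == [(10, "commit"), (11, "abort")]
```

Because every replica runs the same deterministic validation against the same ordered log, they all reach the same commit/abort decision for every transaction without further coordination, which is what preserves serializability across replicas.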
One other goal we achieved was reducing transaction latency to the minimum possible. In a global environment, network round trips are very costly. Every other system based on two-phase commit (such as Spanner or CockroachDB) requires at least two global round trips of communication, but write transactions in Fauna require only one. Furthermore, this design gives you consistent, fast, locally served reads.
As you saw in the clustering sequence, we’ve worked hard to make Fauna the simplest possible database to operate and scale. You can literally set up a 5-node cluster replicated across 5 regions within minutes. Commands are coarse-grained and easily integrated into your DevOps workflow and automation. The build illustrates the simplicity: add a node to an existing cluster and the system does the rest.
Unlike MongoDB, Cassandra, and YugaByte, FaunaDB secures client access by default (it seems like only SQL systems and DBaaS offerings actually do this). Fauna offers fine-grained per-record access control policies (Oracle, Firebase, and Postgres with an extension do this). Unique to Fauna: end-user credentials management and access control. The temporal data model enables detailed auditing: you can see how your data has evolved along a timeline.
Also unique to Fauna is the QoS tied into the security model. It ensures that a single client cannot take over your cluster.
We will introduce end-to-end encryption soon, which will create the most comprehensive security capabilities in the market.
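The per-record access control idea can be sketched in miniature. This is not Fauna's actual security API; the record layout, ACL shape, and function names are invented to show the shape of the check: every read runs under a client identity, and each record carries its own access list.

```python
# Illustrative sketch (not Fauna's actual API) of row-level access
# control: each record carries an ACL, and every query runs under a
# client identity that is checked against each record it touches.

records = {
    "user:1": {"data": {"email": "a@example.com"}, "acl": {"user:1", "role:admin"}},
    "user:2": {"data": {"email": "b@example.com"}, "acl": {"user:2", "role:admin"}},
}

def read_record(identity, key):
    """Return the record's data, or raise if the identity lacks access."""
    rec = records[key]
    if identity not in rec["acl"]:
        raise PermissionError(f"{identity} may not read {key}")
    return rec["data"]

own = read_record("user:1", "user:1")        # ok: owner reads own record
admin = read_record("role:admin", "user:2")  # ok: admin role
# read_record("user:1", "user:2") would raise PermissionError
```

Enforcing this inside the database, rather than in application code, is what protects against application error: a buggy query can only ever see the records its identity is entitled to.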
Fauna’s multitenancy support is designed to solve two problems, which are really two sides of the same coin:
Different teams within an organization cannot share hardware resources due to the lack of performance isolation, which leads to inefficient hardware utilization in support of a data silo per team.
Shared systems are extremely sensitive to changes in workload, which requires strict control over who has access to transactional data. It’s easy for a prototype or an analytics job to take a production system offline.
Fauna solves both of these problems by letting operators allocate a finite amount of cluster resources to a given team or application.
You can have a higher leverage ops team managing fewer clusters (self-service dev-ops)
You can provide wider access to real-time transactional datasets
No downtime is required to onboard or manage tenants in a cluster. It all just works. (That is the theme here.)
Fauna features built-in temporality. Each write is an update instead of an overwrite. This enables change tracking for all data based on your retention policies. Temporality enables new use cases such as fine-grained auditing of data, social activity feeds, and edge computing scenarios with occasionally connected devices.
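A minimal sketch of this write-as-update model, with invented names: every write appends a new version rather than overwriting, so reads can target any past point in time and changes since a timestamp can be replayed as a feed.

```python
# Minimal sketch of a temporal document store: every write appends a
# new (timestamp, value) version instead of overwriting, enabling
# point-in-time reads and change feeds. Names here are illustrative.

class TemporalStore:
    def __init__(self):
        self.versions = {}  # key -> list of (timestamp, value), oldest first

    def write(self, key, value, ts):
        self.versions.setdefault(key, []).append((ts, value))

    def read_at(self, key, ts):
        """Value of key as of time ts, or None if it did not exist yet."""
        result = None
        for t, v in self.versions.get(key, []):
            if t <= ts:
                result = v
        return result

    def changes_since(self, key, ts):
        """Change feed: all versions written strictly after ts."""
        return [(t, v) for t, v in self.versions.get(key, []) if t > ts]

store = TemporalStore()
store.write("doc:1", {"stock": 2}, ts=100)
store.write("doc:1", {"stock": 1}, ts=200)
store.write("doc:1", {"stock": 0}, ts=300)

store.read_at("doc:1", 250)        # -> {"stock": 1}   (point-in-time read)
store.changes_since("doc:1", 150)  # -> [(200, {"stock": 1}), (300, {"stock": 0})]
```

Retention policies would then bound how far back `read_at` can reach by pruning old versions; an occasionally connected device can catch up by asking for `changes_since` its last sync time.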
A quick overview of a Fauna node: unlike other databases, FaunaDB has a self-contained node with all the core services built in. You deploy a node to get started, and you add more nodes to scale. They work in a peer-to-peer fashion for scale and availability. There are no additional pieces to install. More about that in a bit.
Few things to point out:
The query interface is built to integrate very easily into modern programming patterns. As we noted, it combines documents with relational indexing to give you a structure that significantly simplifies your data model.
Each Fauna client has a unique identifier. The identifier is associated with a policy that determines the level of access as well as the tenant priority for QoS.
Queries flow to a scheduler that determines which workloads to prioritize based on the QoS settings for that client.
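One common way to implement such prioritization is weighted fair (stride-style) scheduling; the following is a sketch of that general technique, not Fauna's actual scheduler, and all names are invented. Each tenant accumulates a "pass" proportional to the inverse of its weight, and the scheduler always serves the backlogged tenant with the smallest pass.

```python
# Hypothetical sketch of QoS-based scheduling: each client identity maps
# to a tenant priority weight, and a stride-style scheduler dispatches
# the pending query whose tenant has consumed the least weighted service.

from collections import deque

class QosScheduler:
    def __init__(self, weights):
        self.weights = weights                   # tenant -> priority weight
        self.passes = {t: 0.0 for t in weights}  # weighted service consumed
        self.pending = {t: deque() for t in weights}

    def submit(self, tenant, query):
        self.pending[tenant].append(query)

    def next_query(self):
        # Pick the backlogged tenant that has consumed the least service.
        ready = [t for t in self.weights if self.pending[t]]
        tenant = min(ready, key=lambda t: self.passes[t])
        self.passes[tenant] += 1.0 / self.weights[tenant]  # low weight -> big charge
        return tenant, self.pending[tenant].popleft()

sched = QosScheduler({"prod": 3, "analytics": 1})
for i in range(4):
    sched.submit("prod", f"p{i}")
    sched.submit("analytics", f"a{i}")

order = [sched.next_query()[0] for _ in range(8)]
# The first four dispatches serve "prod" three times and "analytics" once,
# matching the 3:1 weights, so a low-priority tenant can never starve
# production traffic.
```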
FATE is our secret sauce. Based on patent-pending technology, it processes strongly consistent ACID transactions using a distributed write-ahead log. You scale out the log for greater throughput. We can whiteboard the details in a follow-on conversation if there is interest; there is much to discuss here.
The cluster manager takes care of all the replication and management of nodes. It is highly optimized for performance and minimal chatter.
To summarize, FaunaDB: real customers, real production deployments.