CQL is a structured query language for Cassandra. It is not a full replacement for client libraries, but it pushes as much work as possible server-side and offers a familiar, SQL-like syntax for querying Cassandra in a user-friendly way. Key CQL commands include CREATE KEYSPACE, CREATE COLUMNFAMILY, CREATE INDEX, USE, SELECT, UPDATE, DELETE, TRUNCATE, BATCH, and DROP. A consistency level can be attached to a command with USING; valid levels range from CONSISTENCY ZERO to CONSISTENCY DCQUORUMSYNC.
CQL – Cassandra Query Language
1. CQL – Cassandra Query Language. Courtney Robinson (crlog.info); Eric Evans (Python tests and CQL spec)
2. Thrift or Avro? External dependency. Community activity dissipates. Too generic. Many user-reported problems caused by Thrift or the misuse thereof... It's Thrift! It's Avro! It's CQL...
3. What is it? Effectively a structured query language. A replacement for clients? Not really... it attempts to push as much work server-side as possible. Familiar syntax; a user-friendly API for Cassandra newcomers.
4. CQL v1.0.0 – Keywords: USE, SELECT, UPDATE, DELETE, TRUNCATE, DROP, BATCH. Special statements: CREATE KEYSPACE, CREATE COLUMNFAMILY, CREATE INDEX. That's right! No INSERT...?
5. Specifying Consistency: ... USING <CONSISTENCY> ... Made up of the keyword USING, followed by a consistency level identifier. Valid consistency levels are: CONSISTENCY ZERO, CONSISTENCY ONE (default), CONSISTENCY QUORUM, CONSISTENCY ALL, CONSISTENCY DCQUORUM, CONSISTENCY DCQUORUMSYNC. E.g. BEGIN BATCH USING CONSISTENCY ONE
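As a sketch of the syntax above, a consistency level can also be attached to a single statement the same way (the column family and key names here are made up for illustration):

```sql
-- hypothetical column family and key, CQL 1.0-era syntax
UPDATE Users USING CONSISTENCY QUORUM
  SET 'name' = 'bob' WHERE KEY = 'user-1';
```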
6. Create Keyspace: CREATE KEYSPACE <name> WITH replication_factor = <factor> AND strategy_class = <class> [AND strategy_options.<option> = <value> [AND strategy_options.<option> = <value>]]; E.g. CREATE KEYSPACE TestKeyspace WITH strategy_options:DC1 = '1' AND strategy_class = 'NetworkTopologyStrategy'. Create Column Family: CREATE COLUMNFAMILY <name> [(name1 type, name2 type, ...)] [WITH keyword1 = arg1 [AND keyword2 = arg2 [AND ...]]]; Setting column types: CREATE COLUMNFAMILY <name> (name1 type, name2 type) ...; Create Index: CREATE INDEX [index_name] ON <column_family> (column_name); Used to create a new, automatic secondary index for the named column.
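Putting the three statements together, a minimal sketch following the synopses above (the keyspace, column family, column, and index names are all hypothetical):

```sql
-- hypothetical names throughout; CQL 1.0-era syntax per the synopses above
CREATE KEYSPACE Demo
  WITH replication_factor = 1
  AND strategy_class = 'SimpleStrategy';

-- typed columns are validated on write
CREATE COLUMNFAMILY Users (name utf8, age int)
  WITH comparator = utf8;

-- automatic secondary index on the 'age' column
CREATE INDEX users_age_idx ON Users (age);
```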
7. USE: USE <KEYSPACE>; The USE keyword followed by a valid keyspace name; sets the working keyspace per connection. DROP: DROP <KEYSPACE|COLUMNFAMILY> <name>; E.g. DROP KEYSPACE KSName; DROP COLUMNFAMILY CFName; Immediate, irreversible removal of keyspace and column family namespaces. TRUNCATE: TRUNCATE <COLUMN FAMILY>; Accepts a single column family name and permanently removes all data from said column family.
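A quick sketch of the three statements in sequence (keyspace and column family names are hypothetical):

```sql
-- hypothetical names; CQL 1.0-era syntax
USE Demo;
TRUNCATE Users;           -- permanently removes all data, keeps the definition
DROP COLUMNFAMILY Users;  -- irreversible removal of the column family namespace
DROP KEYSPACE Demo;       -- irreversible removal of the keyspace
```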
8. UPDATE (values): UPDATE <COLUMN FAMILY> [USING <CONSISTENCY>] SET name1 = value1, name2 = value2 WHERE KEY = keyname; UPDATE is used to write one or more columns to a record in a Cassandra column family. No results are returned. Creates or updates rows. E.g. UPDATE CFName SET 'name' = 10 WHERE KEY = '1234'. Statements begin with the UPDATE keyword followed by the column family name, then an optional consistency level specification.
9. BATCH UPDATES: Where's my batch_mutate gone? Similar to code blocks: has a beginning (BEGIN BATCH) and an end (APPLY BATCH). Synopsis: BEGIN BATCH [USING <CONSISTENCY>] UPDATE CF1 SET name1 = value1, name2 = value2 WHERE KEY = keyname1; UPDATE CF1 SET name3 = value3 WHERE KEY = keyname2; UPDATE CF2 SET name4 = value4, name5 = value5 WHERE KEY = keyname3; APPLY BATCH. E.g. BEGIN BATCH USING CONSISTENCY QUORUM UPDATE CFname SET 1='1', 2='2', 3='3', 4='4' WHERE KEY='aa' UPDATE CFname2 SET 5='5', 6='6', 7='8', 9='9' WHERE KEY='ab' UPDATE CFname SET 9='9', 8='8', 7='7', 6='6' WHERE KEY='ac' UPDATE CFname3 SET 5='5', 4='4', 3='3', 2='2' WHERE KEY='ad' UPDATE CFname SET 1='1', 2='2', 3='3', 4='4' WHERE KEY='ae' APPLY BATCH
15. Courtney Robinson @zcourts. Thank you for listening. Questions? Links: http://crlog.info/2011/03/29/cassandra-query-language-aka-cql-syntax/ https://svn.apache.org/viewvc/cassandra/trunk/doc/cql/CQL.html?view=co https://issues.apache.org/jira/browse/CASSANDRA-1703
Editor's Notes
http://wiki.apache.org/cassandra/API#Structures
Write consistency levels:
ANY: Ensure that the write has been written to at least 1 node, including HintedHandoff recipients.
ONE: Ensure that the write has been written to at least 1 replica's commit log and memory table before responding to the client.
QUORUM: Ensure that the write has been written to N / 2 + 1 replicas before responding to the client.
LOCAL_QUORUM: Ensure that the write has been written to <ReplicationFactor> / 2 + 1 nodes within the local datacenter (requires NetworkTopologyStrategy).
EACH_QUORUM: Ensure that the write has been written to <ReplicationFactor> / 2 + 1 nodes in each datacenter (requires NetworkTopologyStrategy).
ALL: Ensure that the write is written to all N replicas before responding to the client. Any unresponsive replicas will fail the operation.
Read consistency levels:
ANY: Not supported. You probably want ONE instead.
ONE: Will return the record returned by the first replica to respond. A consistency check is always done in a background thread to fix any consistency issues when ConsistencyLevel.ONE is used, so subsequent calls will have correct data even if the initial read gets an older value (this is called ReadRepair).
QUORUM: Will query all replicas and return the record with the most recent timestamp once at least a majority of replicas (N / 2 + 1) have reported. Again, the remaining replicas will be checked in the background.
LOCAL_QUORUM: Returns the record with the most recent timestamp once a majority of replicas within the local datacenter have replied.
EACH_QUORUM: Returns the record with the most recent timestamp once a majority of replicas within each datacenter have replied.
ALL: Will query all replicas and return the record with the most recent timestamp once all replicas have replied. Any unresponsive replicas will fail the operation.
The CREATE KEYSPACE statement creates a new top-level namespace (aka "keyspace"). Valid names are any string constructed of alphanumeric characters and underscores, but must begin with a letter. Properties such as replication strategy and count are specified during creation using the following accepted keyword arguments:
replication_factor (required): Numeric argument that specifies the number of replicas for this keyspace.
strategy_class (required): Class name to use for managing replica placement. Any of the shipped strategies can be used by specifying the class name relative to org.apache.cassandra.locator; others will need to be fully qualified and located on the classpath.
strategy_options (optional): Some strategies require additional arguments, which can be supplied by appending the option name to the strategy_options keyword, separated by a colon (:). For example, a strategy option of "DC1" with a value of "1" would be specified as strategy_options:DC1 = 1.
Synopsis: CREATE COLUMNFAMILY <name> [(name1 type, name2 type, ...)] [WITH keyword1 = arg1 [AND keyword2 = arg2 [AND ...]]]; CREATE COLUMNFAMILY statements create new column family namespaces under the current keyspace. Valid column family names are strings of alphanumeric characters and underscores, which begin with a letter.
Specifying column types (optional): CREATE COLUMNFAMILY <name> (name1 type, name2 type) ...; It is possible to assign columns a type during column family creation. Columns configured with a type are validated accordingly when a write occurs. Column types are specified as a parenthesized, comma-separated list of column term and type pairs.
The list of recognized types:
bytes: Arbitrary bytes (no validation)
ascii: ASCII character string
utf8: UTF8-encoded string
timeuuid: Type 1 UUID
uuid: Type 4 UUID
int: 4-byte integer
long: 8-byte long
Note: In addition to the recognized types listed above, it is also possible to supply a string containing the name of a class (a sub-class of AbstractType), either fully qualified or relative to the org.apache.cassandra.db.marshal package.
Column family options (optional): CREATE COLUMNFAMILY ... WITH keyword1 = arg1 AND keyword2 = arg2; A number of optional keyword arguments can be supplied to control the configuration of a new column family. Each option is listed below with its default:
comparator (utf8): Determines sorting and validation of column names. Valid values are identical to the types listed above.
comment (none): A free-form, human-readable comment.
row_cache_size (0): Number of rows whose entire contents to cache in memory.
key_cache_size (200000): Number of keys per SSTable whose locations are kept in memory in "mostly LRU" order.
read_repair_chance (1.0): The probability with which read repairs should be invoked on non-quorum reads.
gc_grace_seconds (864000): Time to wait before garbage collecting tombstones (deletion markers).
default_validation (utf8): Determines validation of column values. Valid values are identical to the types listed above.
min_compaction_threshold (4): Minimum number of SSTables needed to start a minor compaction.
max_compaction_threshold (32): Maximum number of SSTables allowed before a minor compaction is forced.
row_cache_save_period_in_seconds (0): Number of seconds between saving row caches.
key_cache_save_period_in_seconds (14400): Number of seconds between saving key caches.
memtable_flush_after_mins (60): Maximum time to leave a dirty table unflushed.
memtable_throughput_in_mb (dynamic): Maximum size of the memtable before it is flushed.
memtable_operations_in_millions (dynamic): Number of operations in millions before the memtable is flushed.
replicate_on_write (false)
Synopsis: USE <KEYSPACE>; A USE statement consists of the USE keyword, followed by a valid keyspace name. Its purpose is to assign the per-connection, current working keyspace. All subsequent keyspace-specific actions will be performed in the context of the supplied value.
Synopsis: DROP <KEYSPACE|COLUMNFAMILY> namespace; DROP statements result in the immediate, irreversible removal of keyspace and column family namespaces.
Synopsis: UPDATE <COLUMN FAMILY> [USING <CONSISTENCY>] SET name1 = value1, name2 = value2 WHERE KEY = keyname; An UPDATE is used to write one or more columns to a record in a Cassandra column family. No results are returned.
Column family: UPDATE <COLUMN FAMILY> ... Statements begin with the UPDATE keyword followed by a Cassandra column family name.
Consistency level: UPDATE ... [USING <CONSISTENCY>] ... Following the column family identifier is an optional consistency level specification.
Specifying columns and row: UPDATE ... SET name1 = value1, name2 = value2 WHERE KEY = keyname; Rows are created or updated by supplying column names and values in term assignment format. Multiple columns can be set by separating the name/value pairs using commas. Each update statement requires exactly one key to be specified using a WHERE clause and the KEY keyword.
Additionally, it is also possible to send multiple UPDATEs to a node at once using a batch syntax: BEGIN BATCH [USING <CONSISTENCY>] UPDATE CF1 SET name1 = value1, name2 = value2 WHERE KEY = keyname1; UPDATE CF1 SET name3 = value3 WHERE KEY = keyname2; UPDATE CF2 SET name4 = value4, name5 = value5 WHERE KEY = keyname3; APPLY BATCH
When batching UPDATEs, a single consistency level is used for the entire batch; it appears after the BEGIN BATCH statement and uses the standard consistency level specification. Batch UPDATEs default to CONSISTENCY ONE when left unspecified.
NOTE: While there are no isolation guarantees, UPDATE queries are atomic within a given record.
Synopsis: DELETE [COLUMNS] FROM <COLUMN FAMILY> [USING <CONSISTENCY>] WHERE KEY = keyname1; DELETE [COLUMNS] FROM <COLUMN FAMILY> [USING <CONSISTENCY>] WHERE KEY IN (keyname1, keyname2); A DELETE is used to perform the removal of one or more columns from one or more rows.
Specifying columns: DELETE [COLUMNS] ... Following the DELETE keyword is an optional comma-delimited list of column name terms. When no column names are specified, the removal applies to the entire row(s) matched by the WHERE clause.
Column family: DELETE ... FROM <COLUMN FAMILY> ... The column family name follows the list of column names.
Consistency level: DELETE ... [USING <CONSISTENCY>] ... Following the column family identifier is an optional consistency level specification.
Specifying rows: DELETE ... WHERE KEY = keyname1; DELETE ... WHERE KEY IN (keyname1, keyname2); The WHERE clause is used to determine which row(s) a DELETE applies to. The first form allows the specification of a single keyname using the KEY keyword and the = operator. The second form allows a list of keyname terms to be specified using the IN notation and a parenthesized list of comma-delimited keyname terms.
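The two synopsis forms can be sketched concretely as follows (column family, column, and key names are hypothetical):

```sql
-- hypothetical names; CQL 1.0-era syntax
-- remove two named columns from a single row
DELETE 'email', 'phone' FROM Users USING CONSISTENCY ONE WHERE KEY = 'user-1';

-- no column list: remove the entire rows matched by the IN list
DELETE FROM Users WHERE KEY IN ('user-2', 'user-3');
```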
Synopsis: SELECT [FIRST N] [REVERSED] <SELECT EXPR> FROM <COLUMN FAMILY> [USING <CONSISTENCY>] [WHERE <CLAUSE>] [LIMIT N]; A SELECT is used to read one or more records from a Cassandra column family. It returns a result-set of rows, where each row consists of a key and a collection of columns corresponding to the query.
Specifying columns: SELECT [FIRST N] [REVERSED] name1, name2, name3 FROM ... or SELECT [FIRST N] [REVERSED] name1..nameN FROM ... The SELECT expression determines which columns will appear in the results and takes the form of either a comma-separated list of names or a range. The range notation consists of a start and end column name separated by two periods (..). The set of columns returned for a range is start and end inclusive. The FIRST option accepts an integer argument and can be used to apply a limit to the number of columns returned per row; when left unset, this limit defaults to 10,000 columns. The REVERSED option causes the sort order of the results to be reversed. It is worth noting that, unlike the projection in a SQL SELECT, there is no guarantee that the results will contain all of the columns specified, because Cassandra is schema-less and there are no guarantees that a given column exists.
Column family: SELECT ... FROM <COLUMN FAMILY> ... The FROM clause is used to specify the Cassandra column family applicable to a SELECT query.
Consistency level: SELECT ... [USING <CONSISTENCY>] ... Following the column family clause is an optional consistency level specification.
Filtering rows: SELECT ... WHERE KEY = keyname AND name1 = value1; SELECT ... WHERE KEY >= startkey AND KEY <= endkey AND name1 = value1; The WHERE clause provides for filtering the rows that appear in results. The clause can filter on a key name, or range of keys, and in the case of indexed columns, on column values. Key filters are specified using the KEY keyword, a relational operator (one of =, >, >=, <, and <=), and a term value.
When terms appear on both sides of a relational operator it is assumed the filter applies to an indexed column. With column index filters, the term on the left of the operator is the name; the term on the right is the value to filter on.
Note: The greater-than and less-than operators (> and <) result in key ranges that are inclusive of the terms. There is no supported notion of "strictly" greater-than or less-than; these operators are merely supported as aliases to >= and <=.
Limits: SELECT ... WHERE <CLAUSE> [LIMIT N] ... Limiting the number of rows returned can be achieved by adding the LIMIT option to a SELECT expression. LIMIT defaults to 10,000 when left unset.
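Combining the clauses described above, a sketch of two SELECT queries (column family, column, and key names are hypothetical):

```sql
-- hypothetical names; CQL 1.0-era syntax
-- read two named columns from one row
SELECT 'name', 'age' FROM Users WHERE KEY = 'user-1';

-- column range 'a'..'z', at most 3 columns per row in reverse order,
-- over an inclusive key range, at most 100 rows
SELECT FIRST 3 REVERSED 'a'..'z' FROM Users USING CONSISTENCY QUORUM
  WHERE KEY >= 'user-1' AND KEY <= 'user-9' LIMIT 100;
```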