This document provides instructions for using Cassandra with Docker, along with example CQL queries for creating and interacting with keyspaces, tables, rows, and columns, including the collection types set, list, and map. It demonstrates how to create and query tables with single or composite primary keys, add and modify columns, and insert, update, select, and delete data. The document concludes with an activity: design and implement an enrollment example in Cassandra.
To access the files needed for this lab, visit: https://drive.google.com/folderview?id=0Bz7DokLRQvx7M2JWZEt1VHdwSE0&usp=sharing
For more content, visit http://liliasfaxi.wix.com/liliasfaxi !
Lab1-DB-Cassandra
CS331 - Database Management Systems
Dr. Lilia Sfaxi
November 6, 2019
Cassandra
To get a Docker container running Cassandra:
docker pull cassandra
docker run --name medtech-cassandra -d cassandra:latest
docker exec -it medtech-cassandra cqlsh
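Note: Cassandra can take some time to finish starting inside the container; if cqlsh initially refuses the connection, wait a few seconds and retry. To watch the startup progress (a standard Docker command, shown here as a convenience):
docker logs medtech-cassandra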
create a keyspace:
CREATE KEYSPACE mykeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
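SimpleStrategy with replication_factor 1 keeps a single copy of each row, which is enough for this single-node lab container. On a real cluster spread over datacenters, one would typically use NetworkTopologyStrategy instead; in the sketch below, the keyspace name is illustrative and 'datacenter1' must match an actual datacenter name in your cluster:
CREATE KEYSPACE mykeyspace_prod WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 3 };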
see all keyspaces:
describe keyspaces;
use the keyspace:
use mykeyspace;
create table:
create table users (user_id int primary key, fname text, lname text);
show all tables in the keyspace:
describe tables;
insert values:
INSERT INTO users (user_id, fname, lname) VALUES (1745, 'john', 'smith');
see inserted values:
select * from users;
read requests (see the results; note that Cassandra rejects some of these by default: range queries on a partition key are not supported, and filtering on a non-indexed, non-key column requires ALLOW FILTERING):
select * from users where user_id < 2000;
select * from users where user_id = 1745;
select * from users where lname = 'smith';
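If you really want to filter on a non-indexed column without creating an index, Cassandra accepts the query only with an explicit ALLOW FILTERING clause, which forces a full scan and is discouraged at scale:
SELECT * FROM users WHERE lname = 'smith' ALLOW FILTERING;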
indexing columns:
CREATE INDEX ON users (lname);
SELECT * FROM users WHERE lname = 'smith';
Create a table with a composite key:
CREATE TABLE tab2 (
  id1 int,
  id2 int,
  first_name varchar,
  last_name varchar,
  PRIMARY KEY (id1, id2));
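In a composite primary key, the first component (id1) is the partition key and the remaining components (id2) are clustering columns: rows that share a partition key are stored together and sorted by the clustering columns. A few illustrative queries (the values are made up for this sketch):
INSERT INTO tab2 (id1, id2, first_name, last_name) VALUES (1, 10, 'sam', 'gamgee');
INSERT INTO tab2 (id1, id2, first_name, last_name) VALUES (1, 20, 'peregrin', 'took');
SELECT * FROM tab2 WHERE id1 = 1; -- all rows of partition 1, ordered by id2
SELECT * FROM tab2 WHERE id1 = 1 AND id2 = 20; -- a single row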
Add a new column to the table:
ALTER TABLE users ADD telephone text;
Update a value in a row:
UPDATE users SET telephone = '21212121' where user_id = 1745;
Empty a table:
TRUNCATE users;
Delete a table:
DROP table users;
Create a table with sets:
CREATE TABLE users (user_id int PRIMARY KEY, fname text, lname text,emails set<text>);
Insert values in a set:
INSERT INTO users (user_id, fname, lname, emails) VALUES (1234, 'Frodo', 'Baggins', {'f@baggins.com', 'baggins@gmail.com'});
Add an element to a set:
UPDATE users SET emails = emails + {'fb@friendsofmordor.org'} WHERE user_id = 1234;
Extract email addresses:
SELECT user_id, emails FROM users WHERE user_id = 1234;
Delete an element from the set:
UPDATE users SET emails = emails - {'fb@friendsofmordor.org'} WHERE user_id = 1234;
To delete all elements:
UPDATE users SET emails = {} WHERE user_id = 1234;
or
DELETE emails FROM users WHERE user_id = 1234;
create a list:
ALTER TABLE users ADD top_places list<text>;
insert elements in the list:
UPDATE users SET top_places = [ 'rivendell', 'rohan' ] WHERE user_id = 1234;
to add a new element in the list:
UPDATE users SET top_places = top_places + [ 'mordor' ] WHERE user_id = 1234;
to set the element at a given position (the old value will be replaced):
UPDATE users SET top_places[1] = 'riddermark' WHERE user_id = 1234;
delete an element from the list, using its index:
DELETE top_places[2] FROM users WHERE user_id = 1234;
delete all elements with a given value:
UPDATE users SET top_places = top_places - ['riddermark'] WHERE user_id = 1234;
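to check the current contents of the list (same pattern as for the emails set above):
SELECT user_id, top_places FROM users WHERE user_id = 1234;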
create a map:
ALTER TABLE users ADD todo map<timestamp, text>;
insert an element in the map:
UPDATE users SET todo = { '2012-9-24' : 'enter mordor', '2012-10-2 12:00' : 'throw ring into mount doom', toTimestamp(now()) : 'die!' } WHERE user_id = 1234;
insert a specific element by key:
UPDATE users SET todo['2012-10-2 12:00'] = 'throw my precious into mount doom' WHERE user_id = 1234;
replace the whole map with insert:
INSERT INTO users (user_id, todo) VALUES (1234, { '2013-9-22 12:01' : 'birthday wishes to Bilbo', '2013-10-1 18:00' : 'Check into Inn of Prancing Pony' });
delete an element in the map:
DELETE todo['2012-9-24'] FROM users WHERE user_id = 1234;
Activity
- Design the enrollment example for a column-based database like Cassandra
- Create the tables and run some queries to test them (a possible starting point is sketched below)
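As a minimal sketch only (one possible design, not the required solution; all table and column names are assumptions), Cassandra schemas are usually modeled around the queries you need to answer, often duplicating data across tables:
CREATE TABLE enrollments_by_student (
  student_id int,
  course_id text,
  course_name text,
  semester text,
  PRIMARY KEY (student_id, course_id));
CREATE TABLE students_by_course (
  course_id text,
  student_id int,
  student_name text,
  PRIMARY KEY (course_id, student_id));
INSERT INTO enrollments_by_student (student_id, course_id, course_name, semester) VALUES (1745, 'CS331', 'Database Management Systems', 'Fall 2019');
SELECT * FROM enrollments_by_student WHERE student_id = 1745; -- all courses of one student
SELECT * FROM students_by_course WHERE course_id = 'CS331'; -- all students in one course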