This tutorial will give you a quick start to SQL including examples. It covers most of the topics required for a basic understanding of SQL and to get a feel of how it works.
This is a document of SQL commands, including some basic information about SQL. I hope it helps you while working with SQL and its functions and commands.
A. Table
Basic data types: CHAR, VARCHAR/VARCHAR2, LONG, NUMBER
Commands to create tables
Commands for table handling: ALTER TABLE, DROP TABLE, INSERT records
B. Commands for record handling
UPDATE, DELETE
SELECT with arithmetic, comparison, and logical operators
Query expression operators
Ordering the records with ORDER BY
Grouping the records
C. SQL functions
Date, numeric, character, conversion
Group functions: AVG, MAX, MIN, SUM, COUNT
Set operations: UNION, UNION ALL, INTERSECT, MINUS
A tutorial for SQL learners, presented in a simple way. It covers all the SQL command categories (DDL, DML, etc.) with suitable examples.
At the end there are three sets of questions with solutions and explanations; each set contains 40+ questions.
(INNER) JOIN, LEFT (OUTER) JOIN, RIGHT (OUTER) JOIN, FULL (OUTER) JOIN, the SQL UNION operator, the SQL GROUP BY HAVING statement, the SQL EXISTS operator, the SQL ANY and ALL operators, the SQL SELECT INTO statement, and the SQL INSERT INTO SELECT statement
SQL stands for Structured Query Language.
SQL is a database management language for relational databases.
SQL lets you access and manipulate databases.
2. The Data Definition Language (DDL) part of SQL permits database tables to be created or deleted. We can also define indexes (keys), specify links between tables, and impose constraints between database tables.
CREATE TABLE - creates a new database table
ALTER TABLE - alters (changes) a database table
DROP TABLE - deletes a database table
3. SQL (Structured Query Language) is a syntax for executing queries. But the SQL language also includes a syntax to update, insert, and delete records.
SELECT - extracts data from a database table
UPDATE - updates data in a database table
DELETE - deletes data from a database table
INSERT INTO - inserts new data into a database table
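The four record-handling commands above can be seen end to end in one short script. This is a minimal sketch, not from the original slides: it uses Python's standard-library sqlite3 module in place of MySQL, and the persons table with its id and lastname columns is invented for illustration.

```python
import sqlite3

# In-memory database; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE persons (id INTEGER, lastname TEXT)")

# INSERT INTO - add new rows
cur.execute("INSERT INTO persons VALUES (1, 'Smith')")
cur.execute("INSERT INTO persons VALUES (2, 'Jones')")

# UPDATE - modify an existing row
cur.execute("UPDATE persons SET lastname = 'Brown' WHERE id = 2")

# DELETE - remove a row
cur.execute("DELETE FROM persons WHERE id = 1")

# SELECT - read back what is left
rows = cur.execute("SELECT id, lastname FROM persons").fetchall()
print(rows)  # [(2, 'Brown')]
```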
4. To create a database:
SYNTAX
CREATE DATABASE database_name;
EXAMPLE
CREATE DATABASE s2i;
To activate the created database:
SYNTAX
USE database_name;
6. Data Type - Description
integer(size), int(size), smallint(size), tinyint(size) - Hold integers only. The maximum number of digits is specified in parentheses.
decimal(size,d), numeric(size,d) - Hold numbers with fractions. The maximum number of digits is specified in "size". The maximum number of digits to the right of the decimal is specified in "d".
char(size) - Holds a fixed-length string (can contain letters, numbers, and special characters). The fixed size is specified in parentheses.
varchar(size) - Holds a variable-length string (can contain letters, numbers, and special characters). The maximum size is specified in parentheses.
date(yyyymmdd) - Holds a date
7. The INSERT INTO statement is used to insert
new rows into a table.
SYNTAX
INSERT INTO table_name VALUES (value1,
value2,....) ;
INSERT INTO table_name (column1,
column2,...)VALUES (value1, value2,....) ;
8. The UPDATE statement is used to modify the data in a table.
SYNTAX
UPDATE table_name SET column1 = new_value WHERE column1 = old_value;
9. The ALTER TABLE statement is used to add or
drop columns in an existing table.
SYNTAX
ALTER TABLE table_name ADD column_name
data_type;
10. The DELETE statement is used to delete rows
in a table.
Syntax
DELETE FROM table_name WHERE
column_name = some_value;
11. Delete All Rows
DELETE FROM table_name;
OR
DELETE * FROM table_name;
OR
TRUNCATE TABLE table_name;
12. To delete a table
DROP TABLE table_name;
To delete a database
DROP DATABASE database_name;
To delete the column in table
ALTER TABLE table_name DROP COLUMN
column_name;
13. The SELECT statement is used to select data
from a table. The tabular result is stored in a
result table (called the result-set).
Select a table fully
Syntax
SELECT * FROM Persons;
14. SELECT A COLUMN
SELECT column_name FROM table_name;
SELECT A ROW
SELECT * FROM table_name WHERE column_name = 'value';
15. In PHP, each row fetched from MySQL is returned as an array.
$var1 = mysql_query("select * from table_name", $connection_name);
while ($var = mysql_fetch_array($var1))
{
    echo "<br/>";
    echo $var['column_name'];
}
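The mysql_* functions shown in the PHP loop are a long-deprecated API. As a rough modern equivalent, not part of the original deck, here is the same fetch-by-column-name loop in Python using the standard-library sqlite3 module; the users table and its column_name column are made up for illustration.

```python
import sqlite3

# sqlite3 stands in for MySQL here; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows support access by column name
cur = conn.cursor()
cur.execute("CREATE TABLE users (column_name TEXT)")
cur.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

names = []
for row in cur.execute("SELECT * FROM users ORDER BY rowid"):
    names.append(row["column_name"])  # like $var['column_name'] in the PHP loop
print(names)  # ['alice', 'bob']
```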
16. The LIKE and NOT LIKE conditions are used to specify a search for a pattern in a column.
Syntax
SELECT column FROM table_name WHERE column_name LIKE pattern;
SELECT column FROM table_name WHERE column_name NOT LIKE pattern;
17. EXAMPLES
SELECT * FROM table_name WHERE column_name LIKE 'O%';
SELECT * FROM table_name WHERE column_name LIKE '%a';
SELECT * FROM table_name WHERE column_name LIKE '%la%';
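The three pattern shapes above ('O%' starts with, '%a' ends with, '%la%' contains) can be checked directly. A small sketch, not from the deck, using Python's built-in sqlite3; the cities table and its sample values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE cities (name TEXT)")
cur.executemany("INSERT INTO cities VALUES (?)",
                [("Oslo",), ("Barcelona",), ("Manila",)])

# 'O%' - starts with O; '%a' - ends with a; '%la%' - contains 'la'
starts_o = cur.execute("SELECT name FROM cities WHERE name LIKE 'O%'").fetchall()
ends_a = cur.execute(
    "SELECT name FROM cities WHERE name LIKE '%a' ORDER BY name").fetchall()
contains = cur.execute("SELECT name FROM cities WHERE name LIKE '%la%'").fetchall()

print(starts_o)  # [('Oslo',)]
print(ends_a)    # [('Barcelona',), ('Manila',)]
print(contains)  # [('Manila',)]
```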
18. The AND & OR operators are used to filter records based on more than one condition.
The AND operator displays a record if both the first condition and the second condition are true.
The OR operator displays a record if either the first condition or the second condition is true.
19. SELECT * FROM table_name WHERE column1 = 'value' AND column2 = 'value';
SELECT * FROM table_name WHERE column1 = 'value1' OR column1 = 'value2';
SELECT * FROM table_name WHERE column1 = 'value' AND (column2 = 'value1' OR column2 = 'value2');
20. The ORDER BY keyword is used to sort the result-set by a specified column.
The ORDER BY keyword sorts the records in ascending order by default.
If you want to sort the records in descending order, you can use the DESC keyword.
21. SELECT column_name(s) FROM table_name
ORDER BY column_name(s) ASC|DESC;
SELECT * FROM table_name ORDER BY
column_name;
SELECT * FROM table_name ORDER BY
column_name DESC;
22. Auto-increment allows a unique number to be generated when a new record is inserted into a table. (In MySQL, an AUTO_INCREMENT column must be defined as a key, e.g. the primary key.)
CREATE TABLE Persons ( Id int(5) AUTO_INCREMENT PRIMARY KEY );
ALTER TABLE Persons AUTO_INCREMENT = 100;
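Auto-increment behaviour can be demonstrated without MySQL. In this sketch (not from the original slides) SQLite's INTEGER PRIMARY KEY column plays the role of MySQL's AUTO_INCREMENT: omit the id on insert and the database assigns the next number.

```python
import sqlite3

# SQLite auto-assigns INTEGER PRIMARY KEY values, analogous to AUTO_INCREMENT.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT)")

# Insert without supplying the id; the database generates it.
cur.execute("INSERT INTO persons (name) VALUES ('Ann')")
cur.execute("INSERT INTO persons (name) VALUES ('Bob')")

ids = [r[0] for r in cur.execute("SELECT id FROM persons ORDER BY id")]
print(ids)  # [1, 2]
```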
23. The TOP clause is used to specify the number
of records to return.
The TOP clause can be very useful on large tables with thousands of records. Returning a large number of records can impact performance.
Note: Not all database systems support the
TOP clause.
24. SELECT column_name(s) FROM table_name LIMIT
number;
SELECT TOP 2 * FROM table_name;
SELECT TOP 50 PERCENT * FROM table_name;
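As the note above says, not every database supports TOP: SQL Server uses TOP, while MySQL and SQLite spell the same idea LIMIT. A minimal sketch (not from the deck) showing LIMIT via Python's sqlite3; the nums table is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE nums (n INTEGER)")
cur.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(1, 6)])

# LIMIT 2 returns only the first two rows of the ordered result-set,
# equivalent to SQL Server's SELECT TOP 2.
top2 = cur.execute("SELECT n FROM nums ORDER BY n LIMIT 2").fetchall()
print(top2)  # [(1,), (2,)]
```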
25. With SQL, aliases can be used for column names and table names.
SELECT column AS column_alias FROM table;
EXAMPLE: SELECT LastName AS Family, FirstName AS Name FROM Persons;
SELECT column FROM table AS table_alias;
EXAMPLE: SELECT LastName FROM Persons AS Employees;
26. The IN operator allows you to specify multiple values in a WHERE clause.
SELECT column_name(s) FROM table_name WHERE column_name IN (value1, value2, ...);
SELECT * FROM table_name WHERE column IN ('value1', 'value2');
27. The BETWEEN operator selects a range of data between two values. The values can be numbers, text, or dates.
SELECT column_name(s) FROM table_name WHERE column_name BETWEEN value1 AND value2;
SELECT column_name(s) FROM table_name WHERE column_name NOT BETWEEN value1 AND value2;
28. To select only DIFFERENT values from a column:
SYNTAX
SELECT DISTINCT column_name FROM table_name;
EXAMPLE
SELECT DISTINCT Company FROM Orders;
29. Constraints are used to limit the type of data
that can go into a table.
Constraints can be specified when a table is
created (with the CREATE TABLE statement) or
after the table is created (with the ALTER
TABLE statement).
NOT NULL
UNIQUE
CHECK
PRIMARY KEY
FOREIGN KEY
DEFAULT
30. The NOT NULL constraint enforces a column
to NOT accept NULL values.
CREATE TABLE Persons ( P_Id int NOT NULL,
LastName varchar(255) NOT NULL, FirstName
varchar(255), Address varchar(255), City
varchar(255) )
31. The UNIQUE constraint uniquely identifies each
record in a database table.
CREATE TABLE Persons ( P_Id int NOT NULL,
LastName varchar(255) NOT NULL, FirstName
varchar(255), Address varchar(255), City
varchar(255), UNIQUE (P_Id) );
Alter table table_name ADD UNIQUE (ID);
32. The PRIMARY KEY constraint uniquely
identifies each record in a database
table.
Primary keys must contain unique values.
A primary key column cannot contain
NULL values
CREATE TABLE Persons ( P_Id int NOT
NULL, LastName varchar(255) NOT NULL,
PRIMARY KEY (P_Id) )
ALTER TABLE Persons ADD PRIMARY KEY
(P_Id)
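The rule that a primary key must be unique can be observed directly. A small sketch, not from the original slides, using Python's built-in sqlite3; the table mirrors the Persons example, and inserting a duplicate P_Id raises an integrity error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE persons (p_id INTEGER PRIMARY KEY, lastname TEXT NOT NULL)")
cur.execute("INSERT INTO persons VALUES (1, 'Smith')")

# A second row with the same primary key value is rejected.
try:
    cur.execute("INSERT INTO persons VALUES (1, 'Jones')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```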
33. A FOREIGN KEY in one table points to a PRIMARY KEY in another table.
CREATE TABLE Orders ( O_Id int NOT NULL, OrderNo int NOT NULL, P_Id int, PRIMARY KEY (O_Id), FOREIGN KEY (P_Id) REFERENCES Persons(P_Id) )
ALTER TABLE Orders ADD FOREIGN KEY (P_Id) REFERENCES Persons(P_Id)
34. The CHECK constraint is used to limit the value
range that can be placed in a column.
CREATE TABLE Persons ( P_Id int NOT NULL,
LastName varchar(255) NOT NULL, FirstName
varchar(255), Address varchar(255), City
varchar(255), CHECK (P_Id>0) )
ALTER TABLE Persons ADD CHECK (P_Id>0)
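The CHECK constraint's effect is easy to demonstrate: a value that violates the condition is rejected at insert time. A minimal sketch (not from the deck) using Python's sqlite3, with the same P_Id > 0 check as the slide.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE persons (p_id INTEGER NOT NULL, CHECK (p_id > 0))")
cur.execute("INSERT INTO persons VALUES (5)")  # satisfies p_id > 0

# A value that fails the check raises an integrity error.
try:
    cur.execute("INSERT INTO persons VALUES (-1)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```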
35. The DEFAULT constraint is used to insert a default value into a column.
CREATE TABLE Persons ( P_Id int NOT NULL, City varchar(255) DEFAULT 'Sandnes' )
ALTER TABLE Persons ALTER City SET DEFAULT 'Sandnes'
36. An index can be created in a table to find data more quickly and efficiently.
CREATE INDEX index_name ON table_name (column_name);
CREATE UNIQUE INDEX index_name ON table_name (column_name);
DROP INDEX index_name ON table_name;
37. The JOIN keyword is used in an SQL statement to query data from two or more tables, based on a relationship between certain columns in these tables.
Different SQL JOINs:
INNER JOIN
LEFT JOIN
RIGHT JOIN
FULL JOIN
38. The INNER JOIN keyword returns rows when there is at least one match in both tables.
SELECT column_name(s) FROM table_name1 INNER JOIN table_name2 ON table_name1.column_name = table_name2.column_name
39. The LEFT JOIN keyword returns all rows from the left table (table_name1), even if there are no matches in the right table (table_name2).
SELECT column_name(s) FROM table_name1 LEFT JOIN table_name2 ON table_name1.column_name = table_name2.column_name
40. The RIGHT JOIN keyword returns all rows from the right table (table_name2), even if there are no matches in the left table (table_name1).
SELECT column_name(s) FROM table_name1 RIGHT JOIN table_name2 ON table_name1.column_name = table_name2.column_name
41. The FULL JOIN keyword returns rows when there is a match in one of the tables.
SELECT column_name(s) FROM table_name1 FULL JOIN table_name2 ON table_name1.column_name = table_name2.column_name
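The difference between INNER JOIN and LEFT JOIN shows up as soon as one table has an unmatched row. A sketch, not from the original slides, using Python's sqlite3 with invented persons and orders tables: Jones has no order, so only the LEFT JOIN keeps her row (with NULL for the order).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE persons (p_id INTEGER, lastname TEXT)")
cur.execute("CREATE TABLE orders (order_no INTEGER, p_id INTEGER)")
cur.executemany("INSERT INTO persons VALUES (?, ?)",
                [(1, "Smith"), (2, "Jones")])
cur.execute("INSERT INTO orders VALUES (100, 1)")  # only Smith has an order

# INNER JOIN: only rows with a match in both tables
inner = cur.execute(
    "SELECT p.lastname, o.order_no FROM persons p "
    "INNER JOIN orders o ON p.p_id = o.p_id").fetchall()

# LEFT JOIN: every person, with NULL where there is no matching order
left = cur.execute(
    "SELECT p.lastname, o.order_no FROM persons p "
    "LEFT JOIN orders o ON p.p_id = o.p_id ORDER BY p.p_id").fetchall()

print(inner)  # [('Smith', 100)]
print(left)   # [('Smith', 100), ('Jones', None)]
```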
42. SQL aggregate functions return a single value,
calculated from values in a column.
AVG() - Returns the average value
COUNT() - Returns the number of rows
FIRST() - Returns the first value
LAST() - Returns the last value
MAX() - Returns the largest value
MIN() - Returns the smallest value
SUM() - Returns the sum
43. SQL scalar functions return a single value, based
on the input value.
UCASE() - Converts a field to upper case
LCASE() - Converts a field to lower case
MID() - Extract characters from a text field
LEN() - Returns the length of a text field
ROUND() - Rounds a numeric field to the number
of decimals specified
NOW() - Returns the current system date and
time
FORMAT() - Formats how a field is to be
displayed
44. GROUP BY... was added to SQL because
aggregate functions (like SUM) return the
aggregate of all column values every time
they are called, and without the GROUP BY
function it was impossible to find the sum for
each individual group of column values.
45. SELECT column, SUM(column) FROM table GROUP
BY column;
EXAMPLE: SELECT Company, SUM(Amount) FROM
Sales;
SELECT Company, SUM(Amount) FROM Sales
GROUP BY Company;
SELECT column, SUM(column) FROM table GROUP
BY column HAVING SUM(column) condition value;
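The Company/Amount example above can be run end to end: GROUP BY produces one SUM per company, and HAVING filters the groups after aggregation. A sketch, not from the original slides, using Python's sqlite3 with invented sales data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (company TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("A", 100), ("A", 250), ("B", 50)])

# GROUP BY: one aggregate row per company
totals = cur.execute(
    "SELECT company, SUM(amount) FROM sales "
    "GROUP BY company ORDER BY company").fetchall()

# HAVING: filter the groups after aggregation
big = cur.execute(
    "SELECT company, SUM(amount) FROM sales "
    "GROUP BY company HAVING SUM(amount) > 200").fetchall()

print(totals)  # [('A', 350), ('B', 50)]
print(big)     # [('A', 350)]
```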
46. Quick reference: Statement - Syntax
AND / OR:
  SELECT column_name(s) FROM table_name WHERE condition AND|OR condition
ALTER TABLE (add column):
  ALTER TABLE table_name ADD column_name datatype
ALTER TABLE (drop column):
  ALTER TABLE table_name DROP COLUMN column_name
AS (alias for column):
  SELECT column_name AS column_alias FROM table_name
AS (alias for table):
  SELECT column_name FROM table_name AS table_alias
BETWEEN:
  SELECT column_name(s) FROM table_name WHERE column_name BETWEEN value1 AND value2
CREATE DATABASE:
  CREATE DATABASE database_name
CREATE INDEX:
  CREATE INDEX index_name ON table_name (column_name)
47. CREATE TABLE:
  CREATE TABLE table_name ( column_name1 data_type, column_name2 data_type, ... )
CREATE UNIQUE INDEX:
  CREATE UNIQUE INDEX index_name ON table_name (column_name)
CREATE VIEW:
  CREATE VIEW view_name AS SELECT column_name(s) FROM table_name WHERE condition
DELETE FROM:
  DELETE FROM table_name (Note: deletes every row in the table!)
  or
  DELETE FROM table_name WHERE condition
DROP DATABASE:
  DROP DATABASE database_name
DROP INDEX:
  DROP INDEX table_name.index_name
DROP TABLE:
  DROP TABLE table_name
48. GROUP BY:
  SELECT column_name1, SUM(column_name2) FROM table_name GROUP BY column_name1
HAVING:
  SELECT column_name1, SUM(column_name2) FROM table_name GROUP BY column_name1 HAVING SUM(column_name2) condition value
IN:
  SELECT column_name(s) FROM table_name WHERE column_name IN (value1, value2, ...)
INSERT INTO:
  INSERT INTO table_name VALUES (value1, value2, ...)
LIKE:
  SELECT column_name(s) FROM table_name WHERE column_name LIKE pattern
49. ORDER BY:
  SELECT column_name(s) FROM table_name ORDER BY column_name [ASC|DESC]
SELECT:
  SELECT column_name(s) FROM table_name
SELECT *:
  SELECT * FROM table_name
SELECT DISTINCT:
  SELECT DISTINCT column_name(s) FROM table_name
SELECT INTO (used to create backup copies of tables):
  SELECT * INTO new_table_name FROM original_table_name
  or
  SELECT column_name(s) INTO new_table_name FROM original_table_name
50. TRUNCATE TABLE (deletes only the data inside the table):
  TRUNCATE TABLE table_name
UPDATE:
  UPDATE table_name SET column_name = new_value [, column_name = new_value] WHERE column_name = some_value
WHERE:
  SELECT column_name(s) FROM table_name WHERE condition