This document discusses data science as a career option and provides an overview of the roles of data analyst, data scientist, and data engineer. It notes that data analysts solve problems using existing tools and manage data quality, while data scientists are responsible for undirected research and strategic planning. Data engineers compile and install database systems. The document also outlines the typical salaries for each role and discusses the growing demand for data science skills. It provides recommendations for learning tools and resources to pursue a career in data science.
1. A Discussion on Data Science as a Career Option
-By Anshik
-Under Student Mentorship Prog.
2.
3. Overview
As data has multiplied, so has the ability to collect, organize, and analyze it. Data storage is cheaper than ever, processing power is more massive than ever, and tools are more accessible than ever to mine the huge amounts of available data for business intelligence.
The McKinsey Global Institute predicted that by 2018 the U.S. could face a shortage of 1.5 million people who know how to leverage data analysis to make effective decisions.
Enter: you, taking stock of your three main career options: data analyst, data scientist, and data engineer.
4. Career options and the differences between them
Data Analyst (1.6L - 8L): Solves problems using existing tools; no mathematical or research background required. Manages the quality of scraped data, queries databases, and serves data as visualizations.
Data Scientist (3.5L - 18L): Similar to a data analyst in many aspects, but responsible for undirected research and for tackling open-ended problems and questions. A data analyst summarizes the past; a data scientist strategizes for the future.
Data Engineer (3L - 21L): Does the groundwork for the former two. Responsible for compiling and installing database systems, writing complex queries, scaling to multiple machines, and putting disaster recovery systems into place.
6. What do you think?
A data set that contains the salaries of people who work at an organization.
-- What questions can be formed?
-- What interpretations can be made?
7. 1. Most of the positions seek Masters/PhD students (especially in statistics).
8. 2. Learning from MOOCs is not easy and is time-consuming.
9. 3. Condense what you know in a presentable manner.
11. Srikanth Velamakanni, CEO of CA-headquartered Fractal Analytics:
“In the next few years, the size of the analytics market will evolve to at least one-third of the global IT market from the current one-tenth.”
13. Key points
● Huge Job Opportunities & Meeting the Skill Gap
● Salary Aspects
● The Rise of Unstructured and Semistructured Data Analytics
● Used Everywhere
14. Total Enterprise Data Growth 2005-2015
The way we capture, store, analyze, and distribute data is transforming. Deduplication, compression, and analysis tools are lowering costs.
15. Tools and Resources
16. Categories and Links
Books: ISLR, R for Dummies, Advanced R, Machine Learning for Hackers (Py), NLP with Python
Websites and Blogs: Analytics Vidhya, Rbloggers, Kaggle Scripts, CrowdAnalytics, students.brown.edu, github.io
Statistics and Linear Algebra: Inferential and Descriptive Statistics by Udacity, MSR sir's Prob & Stats slides, Khan Academy (Lin. Alg.)
Machine Learning and AI: Andrew Ng's ML class, Johns Hopkins Data Analysis, Deepak Khemani (AI, NPTEL)
Data Storage and Visualization: MongoDB (Udacity), D3.js documentation and wiki
17. Timeline (Weeks) [Beginners]: 1, 3, 5, 7, 10, 12, 14, 20
Learn the language - R/Python
Start doing hackathons/pet projects
Practice the language, finish an intro to ML
Do more advanced ML, start optimizing your code, start reading git commits
18. Intro to ML & R
Installing Packages:
To install a package, use the install.packages() function. Once a package is installed, it must be loaded into your current R session before being used, with library() or require(). Think of this as taking the book off of the shelf and opening it up to read.
TIP: Use the require() function for loading a package, as it returns FALSE if the package is not found (library() throws an error instead).
Data Types:
R has a number of basic data types.
1. Numeric: also known as double; the default type when dealing with numbers. Examples: 1, 1.0, 42.5
2. Integer: examples: 1L, 2L, 42L
3. Complex: example: 4 + 2i
4. Logical: two possible values, TRUE and FALSE; you can also use T and F, but this is not recommended. NA is also considered logical.
5. Character: examples: "a", "Statistics", "1 plus 2."
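A minimal R sketch of the two points above, package loading and the basic types; everything here uses only base R.

```r
# library() stops with an error when a package is missing;
# require() returns FALSE (with a warning), so the result can be tested:
if (!require("stats", quietly = TRUE)) {
  install.packages("stats")  # never runs here: "stats" ships with base R
}

# Basic data types and their classes
class(42.5)     # "numeric"  -- double is the default for numbers
class(1L)       # "integer"  -- the L suffix makes an integer literal
class(4 + 2i)   # "complex"
class(TRUE)     # "logical"  -- NA is also logical
class("a")      # "character"
```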
19.
20. R Object-Oriented Systems
S3: lacks a formal definition; objects are created by setting the class attribute; attributes are accessed using $; methods belong to generic functions; follows copy-on-modify semantics.
S4: class defined using setClass(); objects are created using new(); attributes are accessed using @; methods belong to generic functions; follows copy-on-modify semantics.
Reference Classes: class defined using setRefClass(); objects are created using generator functions; attributes are accessed using $; methods belong to the class; does not follow copy-on-modify semantics.
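The three systems side by side, as a runnable sketch (the Person classes and the greet generic are made up for illustration):

```r
library(methods)  # home of S4 and reference classes (attached by default)

## S3: no formal definition -- just set the class attribute
p1 <- list(name = "Anshik")
class(p1) <- "person"
greet <- function(x) UseMethod("greet")           # methods belong to a generic
greet.person <- function(x) paste("Hi,", x$name)  # fields accessed with $
greet(p1)                                         # "Hi, Anshik"

## S4: formal class via setClass(); objects via new(); slots via @
setClass("Person4", slots = c(name = "character"))
p2 <- new("Person4", name = "Anshik")
p2@name                                           # "Anshik"

## Reference classes: methods belong to the class; a generator makes objects
PersonR <- setRefClass("PersonR",
  fields  = list(name = "character"),
  methods = list(greet = function() paste("Hi,", name)))
p3 <- PersonR$new(name = "Anshik")
p3$greet()                                        # "Hi, Anshik"
```

Reference-class objects are mutated in place rather than copied, which is why the slide flags them as not following copy-on-modify semantics.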
22. We will use the inbuilt cars dataset in base R
Data gathered during the 1920s about the speed of cars and the resulting distance it takes for the car to come to a stop.
Objective: how far does a car travel before stopping, when traveling at a certain speed?
23. What sort of function should we use for f(X) [Y = f(X) + e] for the cars data?
- A horizontal line? We see this doesn't seem to do a very good job. Many of the data points are very far from the orange line representing the constant fit. This is an example of underfitting.
- Make f(x) depend on x. As speed increases, the distance required to come to a stop increases. There is still some variation about this line, but it seems to capture the overall trend.
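A quick way to see the underfitting claim numerically: compare the residual sum of squares of the flat line (the mean) against the sloped least-squares line. A sketch; the variable names are mine.

```r
data(cars)  # built-in: speed (mph) and stopping distance dist (ft)

# Candidate 1: horizontal line at the mean stopping distance
flat_rss <- sum((cars$dist - mean(cars$dist))^2)

# Candidate 2: let f(x) depend on x -- a straight line fit by lm()
fit      <- lm(dist ~ speed, data = cars)
line_rss <- sum(residuals(fit)^2)

flat_rss              # error of the flat line (the total sum of squares)
line_rss              # far smaller
line_rss < flat_rss   # TRUE: the horizontal line underfits
```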
24.
25. Assumptions of Linear Regression: LINE
Linear. The relationship between Y and x is linear, of the form β0 + β1x.
Independent. The errors ϵ are independent.
Normal. The errors ϵ are normally distributed; that is, the "error" around the line follows a normal distribution.
Equal Variance. At each value of x, the variance of Y is the same, σ².
We have to find the line that minimizes the sum of all squared distances from the points to the line.
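That minimization has a closed form for simple linear regression; a sketch checking it against lm() on the cars data (variable names are mine):

```r
data(cars)
x <- cars$speed
y <- cars$dist

# Least-squares solution: beta1 = Sxy / Sxx, beta0 = ybar - beta1 * xbar
beta1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
beta0 <- mean(y) - beta1 * mean(x)

fit <- lm(dist ~ speed, data = cars)
all.equal(unname(coef(fit)), c(beta0, beta1))  # TRUE
round(c(beta0, beta1), 3)                      # approx -17.579 and 3.932
```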
26. lm()
stop_dist_model = lm(dist ~ speed, data = cars)
The abline() function is used to add lines of the form a + bx to a plot (hence "abline"). When we give it stop_dist_model as an argument, it automatically extracts the regression coefficient estimates (β̂0 and β̂1) and uses them as the intercept and slope of the line. Here we also use lwd to modify the width of the line, as well as col to modify its color.
lm() returns an object of class "lm". We can access its members using the $ operator:
> names(stop_dist_model)
> stop_dist_model$residuals
Use summary() to summarize the output of the linear regression. The summary() command also returns a list, and we can again use names() to learn about the elements of this list:
> names(summary(stop_dist_model))
> summary(stop_dist_model)$r.squared
Use the predict() function to predict the output for given input values:
> predict(stop_dist_model, data.frame(speed = c(8, 21, 50)))
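The calls on this slide, assembled into one runnable script (the lwd and col choices are arbitrary):

```r
data(cars)
stop_dist_model <- lm(dist ~ speed, data = cars)

# Scatter plot with the fitted a + b*x line overlaid
plot(dist ~ speed, data = cars)
abline(stop_dist_model, lwd = 3, col = "darkorange")

names(stop_dist_model)              # components of the "lm" object
head(stop_dist_model$residuals)     # members accessed with $

model_summary <- summary(stop_dist_model)  # summary() returns a list too
names(model_summary)
model_summary$r.squared             # about 0.65 for this fit

# Predicted stopping distances at speeds 8, 21, and 50
predict(stop_dist_model, data.frame(speed = c(8, 21, 50)))
```

Note that speed = 50 is well outside the observed range of the data (4-25 mph), so that last prediction is an extrapolation.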
27. Thank You
-Anshik
8826274098 (WhatsApp)