Big data technologies can be categorized as operational or analytical. Operational technologies deal with raw daily data like online transactions, while analytical technologies analyze operational data for business decisions. The document describes several examples of big data technologies categorized by data storage, mining, analytics, and visualization. Common storage technologies include Hadoop, MongoDB, and Cassandra. Data mining tools include Presto, RapidMiner, and Elasticsearch. Analytics are performed using Apache Kafka, Splunk, KNIME, Spark, and R. Popular visualization technologies are Tableau and Plotly.
2. What are Big Data Technologies?
• Big Data Technologies are the software utilities that incorporate data
mining, data storage, data sharing, and data visualization. The term
broadly covers the data itself and the data frameworks, including the
tools and techniques used to investigate and transform data.
• A Big Data Technology can be defined as a software utility designed to
analyse, process, and extract information from extremely complex and
large data sets that traditional data-processing software could never
deal with.
3. Types of Big Data Technologies:
• Big Data Technology is mainly classified into two types:
1. Operational Big Data Technologies
2. Analytical Big Data Technologies
4. Operational Big Data Technologies
• Operational Big Data is the normal day-to-day data that we generate.
This could be online transactions, social media activity, or the data of
a particular organisation. It can be thought of as the raw data that is
used to feed the Analytical Big Data Technologies.
• A few examples of Operational Big Data are as follows:
Online ticket bookings, including rail tickets, flight tickets, movie tickets,
etc.
Online shopping on platforms such as Amazon, Flipkart, Walmart, and Snapdeal.
Data from social media sites such as Facebook, Instagram, and WhatsApp.
The employee details of any multinational company.
5. Analytical Big Data Technologies
• Analytical Big Data is the more advanced side of Big Data
Technologies. It is somewhat more complex than Operational Big Data. In
short, analytical big data is where the actual performance part comes
into the picture: crucial real-time business decisions are made by
analysing the Operational Big Data.
• A few examples of Analytical Big Data are as follows:
Stock-market analysis.
Space missions, where every single bit of information is crucial.
Weather-forecast information.
Medical fields, where a particular patient's health status can be monitored.
6. Types of Big Data Technologies
• Top big data technologies are divided into 4 fields which are classified
as follows:
1. Data Storage
2. Data Mining
3. Data Analytics
4. Data Visualization
Source: javatpoint
7. Data Storage
1. Hadoop: The Hadoop framework was designed to store and process data
in a distributed data-processing environment on commodity hardware,
using a simple programming model. It can store and analyse data held
on different machines at high speed and low cost.
• Developed by: Apache Software Foundation (first stable release 2011, 10th of Dec)
• Written in: JAVA
• Current stable version: Hadoop 3.1.1
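Hadoop's "simple programming model" is MapReduce. The sketch below is an illustrative, single-process Python version of the classic word-count job; in a real cluster the map and reduce phases run on different machines over HDFS blocks, but the shape of the computation is the same.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["big data tools", "big data storage"]
result = reduce_phase(map_phase(docs))
print(result)  # {'big': 2, 'data': 2, 'tools': 1, 'storage': 1}
```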
8. Data Storage
2. MongoDB: NoSQL document databases like MongoDB offer a direct
alternative to the rigid schemas used in relational databases. This
allows MongoDB to remain flexible while handling a wide variety of
datatypes at large volumes and across distributed architectures.
• Developed by: MongoDB in the year 2009 11th of Feb
• Written in: C++, Go, JavaScript, Python
• Current stable version: MongoDB 4.0.10
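The "flexible schema" point can be seen with plain Python dicts: two documents in the same collection need not share the same fields. This is a toy in-memory sketch (the collection name and `find` helper are made up for illustration); real MongoDB access would go through a driver such as `pymongo`.

```python
# Two documents in the same hypothetical "products" collection:
# unlike rows of a relational table, they need not share a fixed schema.
products = [
    {"_id": 1, "name": "laptop", "price": 900, "specs": {"ram_gb": 16}},
    {"_id": 2, "name": "ebook", "price": 12, "format": "pdf"},  # different fields
]

def find(collection, query):
    """Minimal stand-in for MongoDB's find(): match documents field by field."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

print(find(products, {"name": "ebook"}))  # the second document only
```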
9. Data Storage
3. RainStor is a software company that developed a database management
system of the same name, designed to manage and analyse big data for
large enterprises. It uses deduplication techniques to organise the
storage of large amounts of data for reference.
• Developed by: RainStor Software company in the year 2004.
• Works like: SQL
• Current stable version: RainStor 5.5
10. Data Storage
4. Hunk lets you access data in remote Hadoop clusters through virtual
indexes and lets you use the Splunk Search Processing Language to
analyse your data. With Hunk, you can report on and visualize large
amounts of data from your Hadoop and NoSQL data sources.
• Developed by: Splunk INC in the year 2013.
• Written in: JAVA
• Current stable version: Splunk Hunk 6.2
11. Data Storage
5. Cassandra: Cassandra is a top choice among popular NoSQL databases.
It is a free, open-source, distributed, wide-column store that can
efficiently handle data on large commodity clusters, providing high
availability with no single point of failure.
• Its main features include a distributed design, scalability, a
fault-tolerant mechanism, MapReduce support, tunable consistency, its
own query language (CQL), multi-datacenter replication, and eventual
consistency.
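The "no single point of failure" property comes from hashing each partition key onto a ring of nodes and replicating it to more than one of them. The sketch below is a deliberately simplified model (the node names and replica-selection rule are invented for illustration; real Cassandra uses consistent hashing with virtual nodes):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical three-node cluster
REPLICATION_FACTOR = 2

def replicas(partition_key):
    """Pick REPLICATION_FACTOR distinct nodes for a key by hashing it onto the ring."""
    start = int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

# Every key lives on more than one node, so losing any single node
# never makes the key unavailable.
print(replicas("user:42"))
```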
12. Data Mining
1. Presto is an open-source distributed SQL query engine for running
interactive analytic queries against data sources of all sizes, ranging
from gigabytes to petabytes. Presto allows querying data in Hive,
Cassandra, relational databases, and proprietary data stores.
• Developed by: Facebook (open-sourced in 2013)
• Written in: JAVA
• Current stable version: Presto 0.22
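Presto queries are ordinary ANSI SQL. To show the kind of interactive aggregation query you would submit to a Presto coordinator without needing a cluster, the sketch below runs an equivalent query against an in-memory SQLite database (a stand-in only; the table and data are invented):

```python
import sqlite3

# SQLite stands in for a Presto-accessible data source here, purely to
# illustrate the style of analytic SQL that Presto executes at scale.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 50.0), ("west", 75.0)])

rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 75.0)]
```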
13. Data Mining
2. RapidMiner is a centralized solution featuring a very powerful and
robust graphical user interface that enables users to create, deliver,
and maintain predictive analytics. It allows users to build very
advanced workflows and offers scripting support in several languages.
• Developed by: RapidMiner in the year 2001
• Written in: JAVA
• Current stable version: RapidMiner 9.2
14. Data Mining
3. Elasticsearch is a Search Engine based on the Lucene Library. It
provides a Distributed, MultiTenant-capable, Full-Text Search Engine
with an HTTP Web Interface and Schema-free JSON documents.
• Developed by: Elastic NV in the year 2012.
• Written in: JAVA
• Current stable version: ElasticSearch 7.1
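The "HTTP web interface and schema-free JSON documents" point means that both documents and queries are plain JSON sent over HTTP. The snippet below only builds example request bodies (the index name, fields, and values are invented); in practice they would be sent to the cluster, e.g. as `PUT /articles/_doc/1` and `GET /articles/_search`.

```python
import json

# A schema-free document as Elasticsearch would index it.
document = {"title": "Big Data Tools", "tags": ["search", "analytics"]}

# A query-DSL request body: full-text match on the "title" field.
search_body = {
    "query": {
        "match": {"title": "big data"}
    }
}

print(json.dumps(search_body))
```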
15. Data Analytics
1. Apache Kafka is a distributed streaming platform. A streaming
platform has three key capabilities:
i. Publish and subscribe to streams of records
ii. Store streams of records durably and reliably
iii. Process streams of records as they occur
In the first capability it is similar to a Message Queue or an Enterprise Messaging System.
• Developed by: Apache Software Foundation in the year 2011
• Written in: Scala, JAVA
• Current stable version: Apache Kafka 2.2.0
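At its core a Kafka topic is an append-only log that many consumers read independently, each tracking its own offset. This toy in-memory class (class and consumer names invented; real clients would use a Kafka driver against a broker) sketches that model:

```python
from collections import defaultdict

class MiniTopic:
    """Toy stand-in for a Kafka topic: an append-only log read by many
    independent consumers, each remembering its own offset."""
    def __init__(self):
        self.log = []
        self.offsets = defaultdict(int)  # consumer name -> next position to read

    def publish(self, message):
        self.log.append(message)

    def consume(self, consumer):
        start = self.offsets[consumer]
        self.offsets[consumer] = len(self.log)
        return self.log[start:]

topic = MiniTopic()
topic.publish("order created")
topic.publish("order shipped")
print(topic.consume("billing"))  # both messages so far
topic.publish("order delivered")
print(topic.consume("billing"))  # only the newly published one
```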
16. Data Analytics
2. Splunk captures, Indexes, and correlates Real-time data in a
Searchable Repository from which it can generate Graphs, Reports,
Alerts, Dashboards, and Data Visualizations. It is also used for
Application Management, Security and Compliance, as well as
Business and Web Analytics.
• Developed by: Splunk INC in the year 2014 6th May
• Written in: AJAX, C++, Python, XML
• Current stable version: Splunk 7.3
17. Data Analytics
3. KNIME allows users to visually create Data Flows, Selectively
execute some or All Analysis steps, and Inspect the Results, Models,
and Interactive views. KNIME is written in Java and based on Eclipse
and makes use of its Extension mechanism to add Plugins providing
Additional Functionality.
• Developed by: KNIME in the year 2008
• Written in: JAVA
• Current stable version: KNIME 3.7.2
18. Data Analytics
4. Spark provides In-Memory Computing capabilities to deliver Speed,
a Generalized Execution Model to support a wide variety of
applications, and Java, Scala, and Python APIs for ease of
development.
• Developed by: Apache Software Foundation
• Written in: Java, Scala, Python, R
• Current stable version: Apache Spark 2.4.3
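Spark's programming style is a chain of transformations over data held in memory, finished by an action. The pure-Python sketch below mirrors that shape with built-in `map`, `filter`, and `reduce` (real PySpark would run the same chain distributed across executors, with its own RDD/DataFrame API):

```python
from functools import reduce

# In-memory sketch of the Spark style: transformations (filter, map)
# over a dataset, then a final action (reduce) that produces a result.
numbers = range(1, 11)

squares_of_evens = map(lambda x: x * x,
                       filter(lambda x: x % 2 == 0, numbers))
total = reduce(lambda a, b: a + b, squares_of_evens)

print(total)  # 4 + 16 + 36 + 64 + 100 = 220
```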
19. Data Analytics
5. R is a Programming Language and free software environment
for Statistical Computing and Graphics. The R language is widely
used among Statisticians and Data Miners for developing Statistical
Software and majorly in Data Analysis.
• Developed by: R-Foundation in the year 2000 29th Feb
• Written in: C, Fortran, R
• Current stable version: R-3.6.0
20. Data Analytics
6. Blockchain is used for essential functions such as payment, escrow,
and title; it can also reduce fraud, increase financial privacy, speed
up transactions, and internationalize markets. Blockchain can be used
to achieve the following in a business-network environment:
Shared Ledger: an append-only, distributed system of record shared across the business
network.
Smart Contract: business terms are embedded in the transaction database and executed
with transactions.
Privacy: ensuring appropriate visibility; transactions are secure, authenticated, and
verifiable.
Consensus: all parties in the business network agree to network-verified transactions.
• Developed by: Bitcoin (where the technique originated)
• Written in: JavaScript, C++, Python
• Current stable version: Blockchain 4.0
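The tamper-evidence behind a shared ledger comes from each block storing the hash of the block before it. A minimal sketch (block contents invented for illustration):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each block records transactions plus the hash of the previous block,
# so changing any earlier block invalidates every later link.
genesis = {"prev": "0" * 64, "txs": ["alice pays bob 5"]}
block2 = {"prev": block_hash(genesis), "txs": ["bob pays carol 2"]}

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

valid_before = verify([genesis, block2])
genesis["txs"][0] = "alice pays bob 500"  # tamper with the ledger
valid_after = verify([genesis, block2])
print(valid_before, valid_after)  # True False
```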
21. Data Visualization
1. Tableau is a Powerful and Fastest growing Data Visualization tool
used in the Business Intelligence Industry. Data analysis is very fast
with Tableau and the Visualizations created are in the form of
Dashboards and Worksheets.
• Developed by: Tableau Software (2013, May 17th)
• Written in: JAVA, C++, Python, C
• Current stable version: Tableau 8.2
22. Data Visualization
2. Plotly is mainly used to make creating graphs faster and more
efficient. It provides API libraries for Python, R, MATLAB, Node.js,
Julia, and Arduino, as well as a REST API. Plotly can also be used to
style interactive graphs within Jupyter notebooks.
• Developed by: Plotly in the year 2012
• Written in: JavaScript
• Current stable version: Plotly 1.47.4