The document discusses Kognitio, an in-memory analytical platform for big data. It is built from the ground up to perform large, complex analytics on big data sets using a massively parallel architecture. Kognitio offers its platform both on-premises and in the cloud to provide high-performance analytics capabilities to power business insights. It aims to complement existing data infrastructures like Hadoop and data warehouses through its scalable in-memory approach and tight integration capabilities.
Engineered Systems: Oracle's Vision for the Future – Bob Rhubart
Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and the lowest total cost of ownership over a three- or five-year period of any comparable hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
VMware PEX Boot Camp - The Future Now: NetApp Clustered Storage and Flash for... – NetApp
Business drivers affect the performance expectations of enterprise applications. Data infrastructure must be flexible and agile to support these emerging performance and availability requirements. This session will show you how to build a data infrastructure using NetApp's flash and clustering technologies that is flexible enough to accommodate those changing demands. The session will cover how to combine NetApp's enterprise flash technology (including host-based flash, controller-based caching, hybrid disk shelves, and all-flash arrays) with NetApp's Clustered Data ONTAP to allow dynamic re-optimization of application performance, with an eye on how workload characteristics drive architectural decisions.
Integrating Novell Teaming within Your Existing Infrastructure – Novell
So you've decided to implement Novell Teaming, but how do you use it to leverage your existing environment to the fullest extent? Using product demonstrations, this session will show you how to configure authentication against existing LDAP directories; how to integrate with Novell GroupWise, Exchange or other e-mail systems; and how to expose existing document stores so they can be searched and accessed through the Novell Teaming interface.
EnterpriseDB's Postgres Plus Cloud Database provides fully automated and self-healing PostgreSQL clusters in the cloud. It offers features like high availability, elastic scaling, load balancing, automatic backups, and failover. The solution is database vendor independent, supports multiple cloud platforms, and provides a simple GUI for management. It aims to make cloud databases more fully featured while reducing the operational burden on database administrators.
Clustered ONTAP adoption is growing rapidly. The document highlights Data ONTAP 8.1.1 features like Flash Pools and Infinite Volumes. It discusses how Clustered ONTAP provides a foundation for an agile IT infrastructure with benefits like non-disruptive operations, seamless scaling, and storage efficiency. Case studies show how partners like PeakColo are using Clustered ONTAP to build turnkey cloud services and reduce costs. In conclusion, Clustered ONTAP 8 is proving its value for business critical applications and enabling partners with differentiated innovation.
Hadoop clusters can be provisioned quickly and easily on virtual infrastructure using techniques like linked clones and thin provisioning. This allows Hadoop to leverage capabilities of virtualization like high availability, resource controls, and re-using spare resources. Shared storage like SAN is useful for VM images and metadata, while local disks provide scalable bandwidth for HDFS data. Virtualizing Hadoop simplifies operations and enables flexible, on-demand provisioning of Hadoop clusters.
Choosing the Right Storage for your Server Virtualization Environment – Tony Pearson
The document discusses storage options for virtualized server environments. It describes the IBM Storwize V7000 disk system, which provides storage performance, utilization and productivity benefits. Key features include multi-platform support, performance optimization, built-in advanced functionality to reduce costs, and high availability and disaster recovery features. The Storwize V7000 uses thin provisioning, automated tiering, replication and data migration to improve efficiency.
SmartCloud Provisioning - servers in the cloud in a split second. Steen Eriksen &... – IBM Danmark
IBM SmartCloud Provisioning is a cloud provisioning solution that provides highly automated, scalable, and flexible infrastructure as a service (IaaS). It allows for quick deployment of virtual machines and applications, supports multi-tenancy, and offers advanced image management capabilities. DutchCloud, an IBM partner, implemented SmartCloud Provisioning to provide their customers with on-demand, isolated cloud resources and disaster recovery capabilities with minimal administration.
The document discusses cloud computing and SKALICLOUD's cloud hosting services. It highlights that SKALICLOUD allows users to create virtual servers on demand and pay only for the resources used. A case study is presented on Malaysia's largest Islamic fund institution moving their servers to SKALICLOUD's cloud which enabled them to scale infrastructure flexibly according to seasonal demand and reduce costs.
This session presents how users of SAP solutions achieve the highest levels of business efficiency and agility by running on NetApp's unified data infrastructure.
Protecting Data in an Era of Content Creation – Presented by Softchoice + EMC – Softchoice Corporation
This presentation was delivered during the February and March Discovery Series events titled “Protecting Data in an Era of Content Creation” sponsored by EMC.
Storage budgets are only growing 3-4% per year, but data is growing 30-40% per year – so in five years you will have to manage roughly five times as much data with roughly the same budget (the compound-growth arithmetic is sketched below). This data is also being accessed by more people, from more places and in new ways. The end result is that some of the old standby approaches to protecting data are no longer cost-effective or manageable. This presentation looks at the strategies IT can employ to provide adequate levels of protection and availability in a very different data management environment.
If you have any questions about the content or the event, please contact @scTechEvents.
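A quick compound-growth check makes the "roughly five times as much data" claim concrete; this is plain arithmetic applied to the growth rates quoted above, not figures taken from the presentation itself.

```python
# Compound data growth over five years at the quoted 30-40% annual rates.
for rate in (0.30, 0.40):
    growth = (1 + rate) ** 5
    print(f"{rate:.0%} yearly growth -> {growth:.1f}x data in five years")

# 30% -> 3.7x, 40% -> 5.4x: hence "roughly five times as much data",
# while a 3-4% budget grows only ~1.2x over the same period.
```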
This document discusses Oracle's vision for providing optimized, integrated storage solutions through hardware and software engineered to work together. It highlights benefits like better performance, reliability, security, shorter deployment times, lower costs, and simplified management. The document also outlines challenges around explosive data growth, changing data types, and rising storage management costs. It introduces Oracle's Sun ZFS Storage Appliance as a solution that can simplify storage management, optimize efficiency, and reduce costs through the use of advanced analytics, simple data management capabilities, and breakthrough storage economics enabled by its differentiated ZFS technology.
Oracle Systems _ Jeff Schwartz _ Engineering Solutions Exadata - Exalogic.pdf – InSync2011
This document provides an overview and introduction to Oracle's engineered systems, Exadata and Exalogic. It discusses the industry evolution from layered components to integrated systems. It then outlines Oracle's strategy of moving to grid computing and how engineered systems are the next logical step. Details are provided on Exalogic, an engineered system for middleware workloads. Exadata, an engineered system for database and storage workloads is also introduced. The presentation concludes by discussing how Exadata and Exalogic can be used together to provide a seamless shared application infrastructure.
This document discusses Oracle Engineered Systems and their value proposition compared to traditional IT deployments. It explains that Engineered Systems offer complete, integrated hardware and software solutions that are optimized, tested, and supported as a single system. This standardized approach simplifies deployment, maintenance, and support while improving performance, reliability, and lowering costs compared to building systems from individual components. The document provides examples of how Oracle has evolved its product offerings over 20 years towards more standardized Engineered Systems and discusses the benefits customers realize from their optimized and integrated design.
Architecting Virtualized Infrastructure for Big Data – Richard McDougall
This document discusses architecting virtualized infrastructure for big data. It notes that data is growing exponentially and that the value of data now exceeds hardware costs. It advocates using virtualization to simplify and optimize big data infrastructure, enabling flexible provisioning of workloads like Hadoop, SQL, and NoSQL clusters on a unified analytics cloud platform. This platform leverages both shared and local storage to optimize performance while reducing costs.
This document discusses successfully breeding rabbits in the cloud. It describes rabbits as small public apps that want to live outside and scale quickly. It discusses how the cloud provides rabbits with global reach, fast time to market, performance, scalability, availability and cost efficiency. The document emphasizes that the weakest link limits overall scalability and that caches, queues and separating concerns can help address this. It stresses understanding cloud capacity and efficiently using resources to keep costs low. Finally, it discusses how rabbits can achieve self-service, marketing, support, education and testing at scale.
Covers the problems of achieving scalability in server farm environments and how distributed data grids provide in-memory storage and boost performance. Includes summary of ScaleOut Software product offerings including ScaleOut State Server and Grid Computing Edition.
Virtual machines are a mainstay in the enterprise, yet Apache Hadoop is normally run on bare metal. This talk walks through the convergence and the use of virtual machines for running Apache Hadoop. We describe results from various tests and benchmarks which show that the overhead of using VMs is small – a small price to pay for the advantages offered by virtualization. The second half of the talk compares multi-tenancy with VMs versus multi-tenancy with Hadoop's Capacity Scheduler. We follow on with a comparison of resource management in vSphere and the finer-grained resource management and scheduling in NextGen MapReduce, which supports a general notion of a container (such as a process, JVM or virtual machine) in which tasks are run. We compare the role of such first-class VM support in Hadoop.
Big Data and virtualization are two of the most exciting trends in the industry today. In this session you will learn about the components of Big Data systems, and how real-time, interactive and distributed processing systems like Hadoop integrate with existing applications and databases. The combination of Big Data systems with virtualization gives Hadoop and other Big Data technologies the key benefits of cloud computing: elasticity, multi-tenancy and high availability. A new open source project that VMware will announce at the Hadoop Summit will make it easy to deploy, configure and manage Hadoop on a virtualized infrastructure. We will discuss reference architectures for key Hadoop distributions and discuss future directions of this new open source project.
1) Cloud platforms can support big data workloads through virtualization which provides agility, isolation, lower costs, and operational efficiency.
2) Modern networks with spine-leaf architectures are well-suited for big data by providing uniform high bandwidth connectivity. This allows for new converged and separated storage models.
3) New distributed storage solutions like HDFS, Ceph, and scale-out NAS provide much higher capacity at lower cost than traditional SAN/NAS. They also offer features like erasure coding, snapshots, cloning and geo-replication (the parity idea behind erasure coding is sketched below).
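Erasure coding is the least self-explanatory feature in that list. The single-parity sketch below shows the core idea in its degenerate case (plain XOR; HDFS erasure coding and Ceph actually use Reed-Solomon codes with multiple parity blocks): any one lost block can be rebuilt from the others, at far less overhead than full replication.

```python
# Single-parity erasure coding in miniature: the parity block is the XOR
# of the data blocks. Losing any ONE block (data or parity) is recoverable.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"block-one.", b"block-two."   # two equal-sized data blocks
parity = xor_blocks(d1, d2)             # 50% overhead vs 200% for 3x replication

recovered_d1 = xor_blocks(parity, d2)   # rebuild d1 after losing it
assert recovered_d1 == d1
```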
This document discusses Replication Server - Real Time Loading (RTL) for replicating data from a source database in real-time to Sybase IQ for analytics purposes. It provides dial-in numbers and passcode for a presentation on the topic. The presentation will cover limitations of pre-RS 15.5 replication solutions to IQ, an overview of RTL, and the new RTL update capabilities in RS.
Windows Azure and the cloud: What it's all about – Maarten Balliauw
This document provides an overview of Windows Azure and the benefits of cloud computing. It discusses how cloud computing addresses inefficiencies in traditional IT by allowing resources to scale up and down as needed. It outlines some "instant wins" possible with cloud, such as reducing inactive compute time and enabling burst scenarios. The document then covers the main services available on Windows Azure, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It provides examples of how companies have used Windows Azure and encourages readers to get started with Windows Azure.
Introduction to Gruter and Gruter's BigData Platform – Gruter
Gruter specializes in helping companies develop successful Big Data environments by designing carefully-modeled best-fit data platform solutions. Gruter's expertise extends across the full data life cycle, ensuring prescient architecture, robust build, timely deployment and simple operation and maintenance. Through a spirit of partnership and collaboration, Gruter provides its clients with the tools, know-how and support needed to put Big Data to work for immediate bottom-line outcomes.
EMC VSPEX BLUE is an all-in-one Hyper-Converged Infrastructure Appliance powered by Intel processor technology and VMware EVO:RAIL software.
It simplifies and automates deployment, and provides an intuitive management dashboard that embeds the VSPEX BLUE Manager to simplify operations, upgrades and patches.
With a software-defined building-block approach, capacity and performance scale linearly – eliminating the need for pre-planned infrastructure purchases and reducing your upfront investments.
All of this is wrapped with a single point of global support from EMC for both hardware and software.
The document discusses virtualizing Hadoop clusters on VMware vSphere. It describes how Hadoop enables parallel processing of large datasets across clusters using MapReduce. Virtualizing Hadoop provides benefits like simple operations, high availability, and elastic scaling. The document outlines challenges with using Hadoop and how virtualization addresses them. It provides examples of deploying Hadoop clusters on Serengeti and configuring different distributions. Performance results show little overhead from virtualization and benefits of local storage. Joint engineering with Hortonworks adds high availability to Hadoop master daemons using vSphere features.
This document appears to be a Gantt chart created by Lauren Morgan. A Gantt chart is a type of bar chart that displays a project schedule and tracks project progress over time. The chart likely shows the tasks, milestones, and deadlines for a project being managed by Lauren Morgan.
The document discusses feedback from a focus group on various design elements for magazine covers, contents pages, and double page spreads. For the cover designs, they preferred the first design with some minor adjustments. For the contents pages, they favored the second design for its fun layout. And for the double page spreads, they liked the second design the most if the noted changes were made to address crowding issues.
Lauren Morgan has chosen the song "To Build a Home" by The Cinematic Orchestra for her music video. The song is about loss and reminiscing on past love and relationships. She wants to create an emotional storyline that engages audiences.
She will dress her model casually in a loose shirt and trainers, and include flashbacks where the model wears a shirt saying "LOVE" to differentiate scenes.
One location will be an empty field to emphasize loneliness, and another will be a house where the model writes a note to her deceased partner.
The video will be filmed in black and white for an emotional feel, and will include contemporary dance to express heartbreak through movement. Feedback on
www.sandwichbaron.co.za brings excellent and exclusive ranges of platters, varying from the anytime platter, Baron party platter, Baroness's Sandwich platter, Breakfast platter, Chicken platter, Cocktail platter, cold meat fantasy platter and more.
This document discusses different types of bullying, including physical bullying like punching and kicking as well as psychological bullying like ignoring someone or demanding money. It notes that bullying can occur through physical or online means. The targets of bullying are often those seen as unusual, gawky, or of a different race. Bullying is enabled when perpetrators, crowds, teachers, and sideliners collude together and when schools lack clear rules against it. Solutions to bullying require awareness and prevention efforts.
The survey results showed that the majority of respondents were females between 16-25 years old, which matches the target audience. Most respondents expected to see dancing and a storyline in vocal music videos. All respondents preferred that music videos include a narrative and be filmed in multiple locations.
EMA Kognitio comparative analysis webinar slides – Kognitio
The document summarizes a web seminar that compares in-memory database management systems (DBMS) from Kognitio, SAP HANA, and Oracle TimesTen. It discusses an analysis by Enterprise Management Associates that evaluated the platforms on their implementation infrastructure, ability to scale, security features, and support. The seminar then provides an overview of Kognitio's analytical platform and how its experience with in-memory technology can power business applications and analytics for large datasets.
The document summarizes Lauren Morgan's editing process for a music video and album packaging. It describes renaming video files for easier navigation, using subtle transitions like clip dissolves and fades to black/white to link scenes and reflect the song's mood. For the album packaging, Morgan adjusted colors, contrast and fonts on cover images, added a black background and text layouts. Information on the band was also included for context.
Accelerating MicroStrategy for real-time BI – Kognitio
The web seminar discussed how Kognitio can accelerate MicroStrategy for real-time business intelligence. Kognitio demonstrated its in-memory analytical platform running MicroStrategy reports and dashboards on large datasets. Kognitio enables flexible, universal access to more data sources for MicroStrategy users and can scale to handle big data workloads.
Android is an open-source, Linux-based operating system designed primarily for smartphones and tablets. Initially created by Android Inc., which was later acquired by Google, Android was unveiled in 2007. It has the largest worldwide market share of any mobile operating system. Key aspects include being open-source, having a large developer community creating applications, and allowing device manufacturers to customize Android for their devices.
This document discusses reliable income investments in the current market environment. It summarizes LM Investment's Australian income funds, which have historically achieved returns of 6-10% annually through investments in high-quality Australian debt securities. LM believes Australia offers a resilient economy and highly diversified market, with strong population growth, affordable housing, and supportive economic conditions that could support continued property price increases. However, LM focuses on investing in specific local markets rather than national averages to access reliable income from Australian investments.
Forget about advertising what you are selling. Focus on what your audience needs and create meaningful experiences that relate to the things they care about.
Presented at Toolbox Conference 2015, in Thessaloniki, Greece.
DevOps Days Tel Aviv 2013: Re-Culturing a 200 employee start-up - David Virts... – DevOpsDays Tel Aviv
In this session, Dvir and David will present how eToro, the world's largest Social Investment Network, embarked on the DevOps journey. A year ago, they had a great product, market and vision, but no automation in place, marginal contribution from new developers, and scalability and stability problems. Today, they are in the early phases of implementing devops practices within the company, which is a work in progress. Employees are already a lot more knowledgeable, and there are a few preliminary success stories they can share. Come and hear how to start a journey whose vision people cannot even understand, and how to make them part of this vision.
Speaker:
Dvir Greenberg and David Virtser, eToro
Dvir serves as the VP of product operations at eToro and David as the devops leader at eToro, and together they're pushing the devops revolution within eToro, trying to optimize it and make it faster and more scalable.
David can be followed on Twitter at @poison_dv
Estimating the Total Costs of Your Cloud Analytics Platform – DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function Data Management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
IaaS provides on-demand, self-service access to computing resources like servers and storage. PaaS automates the deployment of applications on top of IaaS and handles scaling. SaaS delivers applications to users through a thin client like a web browser. iPaaS facilitates integration between SaaS, PaaS, IaaS, and on-premise systems through a cloud-based platform. Popular IaaS include OpenStack and VMware vSphere, PaaS include Cloud Foundry and OpenShift, while Salesforce and Office 365 are examples of SaaS.
This document summarizes a web seminar about Kognitio Cloud on Amazon Web Services. The presentation will include a brief overview of Kognitio and Kognitio Cloud, a demonstration of deploying it on AWS, and a Q&A session. It will be hosted by Michael Hiskey, VP of Marketing at Kognitio, and the key presenter will be Ian Bird, VP of Cloud Solutions at Kognitio.
Kognitio provides an in-memory analytical platform that loads large datasets entirely into RAM, allowing clients to perform complex analytics on datasets as large as tens of billions of records in just tens of seconds. The platform utilizes massively parallel processing across commodity servers to scale linearly with no single points of failure. Kognitio works with clients across industries to power solutions for media analytics, customer loyalty programs, online travel, mobile advertising and more.
Presentation: Overview of Kognitio, Kognitio Cloud and the Kognitio Analytical Platform
Kognitio is driving the convergence of Big Data, in-memory analytics and cloud computing. Having delivered the first in-memory analytical platform in 1989, it was designed from the ground up to provide the highest amount of scalable compute power to allow rapid execution of complex analytical queries without the administrative overhead of manipulating data. Kognitio software runs on industry-standard x86 servers, or as an appliance, or in Kognitio Cloud, a ready-to-use analytical platform. Kognitio Cloud is a secure, private or public cloud Platform-as-a-Service (PaaS), leveraging the cloud computing model to make the Kognitio Analytical Platform available on a subscription basis. Clients span industries, including market research, consumer packaged goods, retail, telecommunications, financial services, insurance, gaming, media and utilities.
To learn more, visit www.kognitio.com and follow us on Facebook, LinkedIn and Twitter.
Software-defined storage (SDS) provides storage independent of underlying hardware through abstraction, automation, and policy-driven provisioning. It can help reduce costs by using commodity hardware and reusing existing resources. While SDS offers benefits like flexibility, efficiency, and delivering storage as a service, there are also challenges to consider like lack of vendor testing for all hardware combinations and difficulty gauging performance. Whether SDS makes sense depends on individual use cases, such as for remote/small offices, scale-out storage, or hyper-convergence. Overall, SDS is a real concept that is already in use today across the industry.
Lessons learned from embedding Cassandra in xPatterns – Claudiu Barbura
The document discusses lessons learned from embedding Cassandra in the xPatterns big data analytics platform. It provides an agenda that includes Cassandra usage in xPatterns, the necessary developments like data modeling optimizations, robust REST APIs and geo-replication, and a demo of exporting to NoSQL APIs. Key lessons learned from Cassandra version 0.6 through 2.0.6 are also summarized, such as the need for consistent clocks, reducing the number of column families, and monitoring.
HPC and cloud distributed computing, as a journey – Peter Clapham
Introducing an internal cloud brings new paradigms, tools and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world with micro-services, autoscaling and autodialing is a journey that cannot be achieved in a single step.
This document provides an overview of OpenStack Block Storage (Cinder) and how it addresses challenges of scaling virtual environments. It discusses how virtualization led to cloud computing with goals of abstraction, automation, and scale. OpenStack was created as open source software to build and manage clouds with common APIs. Cinder provides block storage volumes to OpenStack instances, managing creation and attachment. SolidFire's storage system offers comprehensive Cinder support with guaranteed performance, high availability, and scale for production use.
Data Engineer, Patterns & Architecture The future: Deep-dive into Microservic... – Igor De Souza
With Industry 4.0, several technologies are used to analyze data in real time; maintaining, organizing, and building all of this, on the other hand, is a complex and complicated job. Over the past 30 years, several ideas for centralizing the database in a single place, as the unified and true source of data, have been implemented in companies: the data warehouse, NoSQL, the data lake, and the Lambda and Kappa architectures.
On the other hand, Software Engineering has been applying ideas to separate applications to facilitate and improve application performance, such as microservices.
The idea is to apply microservice patterns to the data and divide the model into several smaller ones. A good way to split it up is to model it using DDD principles. That is how I try to explain and define Data Mesh and Data Fabric.
This document discusses big data analytics tools and technologies. It begins with an overview of big data challenges and available tools. It then discusses Packetloop, a company that provides big data security analytics using tools like Amazon EMR, Cassandra, and PostgreSQL on AWS. Next, it discusses how EMR and Redshift from AWS can be used as big data tools for tasks like batch processing, data warehousing, and live analytics. It concludes by discussing how Intel technologies can help power big data platforms by providing optimized processors, networking, and storage to enable analytics at scale.
SplunkLive! Nutanix Session - Turnkey and scalable infrastructure for Splunk ... – Splunk
Nutanix provides a turnkey and scalable infrastructure for Splunk:
1) The Nutanix solution uses SSD and a scale-out datacenter appliance to address Splunk's IO intensity and provide faster time to value.
2) It employs a scale-out cluster to eliminate server sprawl and simplify adding more data sources.
3) The converged and software-defined Nutanix platform virtualizes Splunk for enterprise features while improving performance, capacity, and manageability over direct deployment.
Slides from the talk given by Carmine Spagnuolo (Postdoctoral Research Fellow, Università degli Studi di Salerno / ACT OR), titled "Technology insights: Decision Science Platform", at Decision Science Forum 2019, the most important Italian event on decision science.
A well-architected cloud provides a stable IT environment that offers easy access to needed resources, usage-based expenses, extra capacity on demand, disaster recovery, and a secure environment, but a well-architected cloud does not magically build itself. It requires careful consideration of a multitude of factors, both technical and non-technical. There is no single architecture that is "right" for an OpenStack cloud deployment. OpenStack can be used for any number of different purposes, and each of them has its own particular requirements and architectural peculiarities. The use cases covered in this talk include:
• General purpose: A cloud built with common components that should address 80% of common use cases.
• Compute focused: A cloud designed to address compute intensive workloads such as high performance computing (HPC).
• Storage focused: A cloud focused on storage intensive workloads such as data analytics with parallel file systems.
• Network focused: A cloud depending on high performance and reliable networking, such as a content delivery network (CDN).
Microservice message routing on Kubernetes – Frans van Buul
Slides from a presentation given at GOTO Amsterdam in June 2018: how to split a given application into a microservices system, considerations regarding message routing between those microservices, and how to deploy everything using the Axon stack, running on Kubernetes.
Interop Las Vegas Cloud Connect Summit 2014 - Software Defined Data Center – Scott Carlson
Presentation materials from 2014 Interop Conference - Cloud Connect Summit - Scott Carlson from PayPal in Las Vegas
Audio: https://www.youtube.com/watch?v=tyYGupLg7IE
Building a PaaS Platform like Bluemix on OpenStack – Animesh Singh
The document discusses building IBM Bluemix on OpenStack using IBM Cloud Manager. Key points include:
- Bluemix is IBM's Platform as a Service offering that allows developers to focus on code by providing integrated services and tools.
- IBM Cloud Manager with OpenStack extends OpenStack to manage heterogeneous environments and simplify deployment. It will be used to deploy Bluemix on OpenStack.
- BOSH will be used for deployment and lifecycle management of Bluemix on OpenStack. It leverages OpenStack APIs to deploy VMs from stemcells and manage the health of processes and VMs.
Verizon's Beth Cohen explains the process of creating the OpenStack Architecture Guide, as delivered to the Boston OpenStack Meetup September 10, 2014.
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs – Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides from me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best-practices guide outlines steps users can take to better protect personal devices and information.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their applications' supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
2. Kognitio is an in-memory analytical platform: built from the ground up to satisfy large and complex analytics on big data sets, a massively parallel, in-memory analytical engine that interoperates with your existing infrastructure.
3. Kognitio
Kognitio is focused on providing the premier high-performance analytical platform to power business insight around the world.
• Privately held
• Dev labs in the UK
• Leadership in the US
• ~100 employees
Core product:
• MPP in-memory analytical platform
• Built from the ground up to satisfy large and complex analytics on big data sets
5. The Kognitio Analytical Platform
• Why an "analytical platform"?
– In the burgeoning "big data" ecosystem, the volume, velocity and variety of data require a new approach
• Disaggregation of persistent data storage and analytics
• Variety of BI tools (MicroStrategy, Tableau, MS Excel, etc.)
• Introduce a new tier to accelerate, govern and increase flexibility
– A complement to Hadoop, EDWs, etc.
• The MPP in-memory structure enables fast ad-hoc reporting
• Standard SQL, MDX, etc. make Hadoop easy and consumable
• Tight integration enables an "information anywhere" approach
7. What is an "In-memory" Analytical Platform?
• A database where all of the data of interest, or specific portions of it, has been permanently pre-loaded into a computer's random access memory (RAM)
• Not a large cache
– Data is held in structures that take advantage of the properties of RAM – NOT copies of frequently used disk blocks
– The database's query optimiser knows at all times exactly which data is in memory and which is not
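To make the cache distinction concrete, here is a minimal sketch of pre-loading a slice of data into RAM, modelled on the "create view image" statement that appears later in this deck; the table and column names are invented for illustration, and the exact DDL may vary by Kognitio version.

-- Define the data of interest (illustrative names).
create view recent_sales as
select store_id, product_id, sale_date, amount
from sales
where sale_date > date '2013-01-01';

-- Build a memory image of the view: rows are pre-loaded into RAM
-- structures rather than cached as disk blocks, so the optimiser
-- always knows this data is memory-resident.
create view image recent_sales;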
8. Kognitio Analytical Platform
• A high-performance in-memory analytical platform that doesn't require specialized servers
• Software
– Quick, simple deployment on commodity hardware or cloud
• Scalable
– Linear scale-out through best-of-breed parallelism
• Powerful
– Unrivalled MPP analytical performance
– Harnesses all CPU cores made available
• Low TCO
– Linux, commodity hardware, no special hardware needs
– SQL relational core familiar to most DBAs
9. For Analytics, the CPU is King
• The key metric of any analytical platform should be GB/CPU
– It needs to effectively utilize all available cores
– Hyper-threads are NOT the equivalent of cores
• Interactive/ad-hoc analytics:
– Think data-to-core ratios of roughly 10 GB of data per CPU core (for example, 10 TB of in-memory data implies on the order of 1,000 cores)
• Every cycle is precious – CPU cores need to be used efficiently
– Techniques such as "dynamic machine code generation"
• Careful – the performance impact of compression:
– It makes disk-based databases go faster
– It makes in-memory databases go slower
10. Speed & Scale from "True MPP"
• Memory & CPU on an individual server = NOWHERE near enough for big data
– Moore's Law: the power of a processor doubles every two years
– Data volumes: double every year!
• The only way to keep up is to parallelise, or scale out:
– Combine the RAM of many individual servers
– Use many CPU cores, spread across many CPUs, housed in many individual computers (1 to 1,000+)
– Data is split across all the CPU cores
– All database operations are parallelised, with no points of serialisation – this is true MPP
• Every CPU core in every server needs to be efficiently involved in every query
11. Free to use - Get started now
Try it now: http://www.kognitio.com/free
12. Kognitio Cloud
Kognitio Cloud is a ready-to-use analytical platform. A secure Platform-as-a-Service (PaaS) available as either a Private or Public Cloud, it leverages the cloud computing model to make the Kognitio Analytical Platform available on a subscription basis.
PRIVATE CLOUD
• Could be referred to as an "exclusive" hybrid cloud offering
• Kognitio was the first to offer "Data-warehousing-as-a-Service" (DaaS) in 1993, a managed-services hosted solution model
• Designed for clients who require a secure, dedicated environment without the skills requirement and capital overhead associated with traditional, in-house analytical implementations
PUBLIC CLOUD
• Ready-to-use in-memory analytical platform leveraging Amazon Web Services (AWS) Elastic Compute Cloud (EC2) infrastructure
• Based on hourly usage per CPU/server and TB of data
• Suitable for use cases with unpredictable usage patterns
• Automatic provisioning in minutes with pre-installed servers
• Elastic scalability (up and down) to meet compute demand
• Attractive to line-of-business functions
The cloud model enables multiple advantages:
Fast execution / time-to-value
• No software or hardware to buy, install, maintain or upgrade
• Analysis projects can be brought to life quickly and easily
• The PaaS model eliminates setup, maintenance and servicing
Flexibility
• Enables delivery of complex analytics to business users
• "Sandbox" environment for development and testing
Lower costs
• Avoid CapEx, with only OpEx charges based on usage/subscription level
• Support and maintenance amortized across relevant contract periods
13. Analytics from the business user down
The slide walks through a delivery flow from Business User to Business Analyst to IT:
1. Understand the business problem
2. Define the requirements
• Forecast ROIs and iteration
3. Perform a Kognitio Cloud Assessment
4. Execute a cloud agreement with Kognitio
5. Build the application
6. Test and deploy the solution
7. Ongoing development & improvement
[The slide also shows a sample report screenshot: unadjusted monthly figures with 9-month totals for 2011 vs. 2010.]
Enables the business:
• Fast integration and time-to-value
• Iterative "sandbox" approach
• Reduced risk
14. Deploy with other technologies on AWS
• One click to launch!
• Automatic deployment of Kognitio and BI tools on Amazon Web Services
• Self-service BI with NeutrinoBI at nbi.kognitiocloud.com
• Pre-loaded sample data in the cloud, ready for use and demonstration
• Multi-node and single-server self-paced demonstrations
• Videos and instructional information
• Kognitio Community forum on LinkedIn
15. Public Cloud multi-node via CloudFormation
• Kognitio configured as a multi-node deployment
• Available as a trial platform on-demand
• kognitio.kognitiocloud.com
• Only a few steps to deployment
16. New! Kognitio version 8: enabling and extending the Analytical Platform
General availability: June 2013
• External functions
• Not Only SQL
• External tables
• Kognitio storage as an external table
• Hadoop connector and other connectors
17. Kognitio Hadoop Integration
• Developed in co-operation with Sears (Metascale)
• More than just a connector – tight integration
– Hadoop does what it is good at – filtering data
– Kognitio does what it is good at – complex analytics
The slide's diagram shows Kognitio defining a view over data held in Hadoop (with optional near-line storage). A statement such as

create view image "name" as
select "field1, field2" from "table" where date > 1/1/12

pushes the request "give me field1, field2 from "file" where date > 1/1/12" down to the Hadoop cluster, which returns only the matching data. Analysts then run standard SQL against the in-memory view, for example:

select
Merchant_Group,
to_char(Num_Accounts, '999,999') Num_Accounts,
to_char(Num_Transactions, '999,999,999') Num_Trans,
to_char(cast(Total_Spend as dec(15,2)), '999,999,999') || ' K' Total_Spend_K
from
(select MG.GroupDesc Merchant_Group, count(distinct Account_ID) as Num_Accounts,
count(*) as Num_Transactions, sum(Transaction_Amount) as Total_Spend
from demo_fs.V_Fin_CC_Trans T, demo_fs.V_Fin_Merchant M, demo_fs.V_Fin_Merch_Group MG
where T.Merchant_Category = M.CategoryNo and M.GroupNo = MG.GroupNo
and upper(Location) in (select distinct upper(Town)
from demo_fs.V_Fin_Postcodes where upper(Town) like '%LOW%')
group by MG.GroupDesc) SQ1
order by Num_Accounts desc;
18. Kognitio Hadoop Connectors
HDFS Connector – fast load of complete files
• Connector defines access to the HDFS file system
• External table accesses row-based data in HDFS
• Dynamic access, or "pin" data into memory
• The complete HDFS file is loaded into memory
• Data filtering requires data to be partitioned into different files within Hadoop
Map Reduce Connector – filter from large files
• Connector uploads an agent to the Hadoop nodes
• Query passes selections and relevant predicates to the agent
• Data filtering and projection take place locally on each Hadoop node
• Only data of interest is loaded into memory, via parallel load streams
• Data can be filtered within a file
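As a hedged sketch of the two connector styles described above, the following shows how an HDFS file might be exposed as an external table and queried; the connector name, target strings, and exact DDL keywords here are assumptions for illustration, not verbatim Kognitio syntax.

-- Hypothetical: register a connector pointing at the Hadoop cluster.
create connector my_hdfs source hdfs target 'namenode 10.0.0.1';

-- Hypothetical: expose row-based HDFS data as an external table.
-- With the HDFS connector the complete file is loaded; with the
-- Map Reduce connector, predicates would instead be pushed to agents
-- on the Hadoop nodes so that only rows of interest are loaded.
create external table weblogs (
log_ts timestamp,
url varchar(2000),
user_id integer
) from my_hdfs target 'file /data/weblogs/part-*';

select count(*) from weblogs where url like '%checkout%';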
19. Not Only SQL
Kognitio External Scripts
– Run third-party binaries or scripts embedded within SQL
• Flexible framework to pass data to/from any executable or interpreter
• Full MPP execution of Perl, Python, Java, R, SAS, etc.
• Any number of rows in/out, partitioning controls
20. Not Only SQL: any language in-line
Kognitio External Scripts
– Run third-party binaries or scripts embedded within SQL
• Perl, Python, Java, R, SAS, etc.
• One-to-many rows in, zero-to-many rows out, or one-to-one

This example reads long comment text from a customer enquiry table; in-line Perl converts the long text into an output stream of words (one word per row), and the query selects the top 1,000 words by frequency using standard SQL aggregation:

create interpreter perlinterp
command '/usr/bin/perl' sends 'csv' receives 'csv';

select top 1000 words, count(*)
from (external script using environment perlinterp
receives (txt varchar(32000))
sends (words varchar(100))
script S'endofperl(
while(<>)
{
chomp();
s/[,.!_]//g;
foreach $c (split(/ /))
{ if($c =~ /^[a-zA-Z]+$/) { print "$c\n" } }
}
)endofperl'
from (select comments from customer_enquiry)) dt
group by 1
order by 2 desc;
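Since the slide lists Python among the supported interpreters, the same pattern can be sketched with a Python script in place of Perl; this is an assumption-laden illustration mirroring the Perl example above, presuming /usr/bin/python is available on the cluster nodes.

create interpreter pyinterp
command '/usr/bin/python' sends 'csv' receives 'csv';

select top 1000 words, count(*)
from (external script using environment pyinterp
receives (txt varchar(32000))
sends (words varchar(100))
script S'endofpy(
# Read comment text on stdin, emit one word per output row.
import sys, re
for line in sys.stdin:
    for w in re.findall(r"[A-Za-z]+", line):
        sys.stdout.write(w + "\n")
)endofpy'
from (select comments from customer_enquiry)) dt
group by 1
order by 2 desc;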
21. Innovative client solutions
TiVo Research & Analytics (Software): 40 TB of RAM performing complex media analytics, cross-correlating data from over 22 sources with set-top box data to allow advertisers, networks and agencies to analyze the ROI of creative campaigns while they are still in flight, enabling self-service reporting for business users.
VivaKi (Public Cloud): The VivaKi Nerve Center provides social media and other analytics for campaign monitoring and near real-time advertising effectiveness. This enables agencies in the Publicis Global Network to provide deep-dive analytics into TBs of data in seconds.
AIMIA (Appliance): AIMIA provides self-service customer loyalty analysis on over 24 billion transactions, with full volumes of POS data live in-memory. Retailers, consumer packaged goods companies and other service providers give merchandise managers "train-of-thought" analysis to better target customers.
Orbitz (Private Cloud): Orbitz leverages Kognitio Cloud to take large volumes of complex data, ingested in real time from web channels, demographic and psychographic data, customer segmentation and modeling scores, and turn it into actionable intelligence, allowing them to think of new ways of offering the right products and services to their current and prospective client base.
PlaceIQ (Public Cloud): PlaceIQ provides actionable hyper-local mobile BI location intelligence. They leverage Kognitio to extract intelligence from large amounts of place, social and mobile location-based data to create hyper-local, targetable audience profiles, giving advertisers the power to connect with consumers at the right place, at the right time, with the right message.
22. Analytics on tens of billions of events in tens of seconds with NO DBA
Context for media analytics:
• In-memory analytical database for big data
• Correlate everything to everything
• MPP + linear scalability
• Predictable and ultra-fast performance
• > 22 data sources
• Commodity servers/equipment
• Market-available IT skills
• No solution re-engineering
Challenges:
– Expanding volumes of data
– Few opportunities for summarization (demographics, purchaser targets, etc.)
– Data too large/complex for traditional database systems
– Need for simple administration
Solution benefits:
– Reports allow advertisers, networks and agencies to analyze the relative strengths and weaknesses of different creative executions, and how such variables as program environment, time slots, and pod position impact their ROI
– Enables self-service reporting for business users
Mars, Inc.: "By using TRA to improve media plans, creative and flighting, Mars has achieved a portfolio increase in ROI versus a year ago of 25% in one category and 35% in a second category."
23. Case Study: AIMIA
In-memory analytics enables market basket analysis with blazing speed.
Background:
A loyalty marketing company that provides marketing and consulting services to retailers, service providers, and consumer packaged goods companies. Their self-service application offers "train-of-thought" analysis with near real-time data processing, enabling clients to better target customers.
Challenge:
• Offer a near-time analytical environment where all EPOS transactions, not just sampled data, could be analyzed (improving statistical confidence)
• Enable analysts to write a query and have the database execute it (no involvement from IT/DBAs)
Solution:
AIMIA lands a Kognitio Analytical Appliance that they re-sell to each of their end-user clients, with years of full-volume EPOS transactions + customer + product data (over 24 billion transactions currently). All transactions are held in memory for complex basket-analysis-type queries.
Results:
The best-tuned Oracle RAC query ran in 25 minutes; the same query on Kognitio: 3 minutes!
That was in the initial implementation, circa 2007. Today, an average bundle of 12-18 queries runs in 90 seconds!
24. Gartner: Kognitio is "visionary"
Strengths – commentary:
• Consistent leadership with innovative pricing models
• Pioneered data warehouse SaaS
• Kognitio Cloud "on demand" offering key for growing clients
• Unique ability to switch between Cloud and Platform
• Meets Gartner's Logical Data Warehouse concept
• Innovative Hadoop integration
• Consistently satisfied clients citing great performance and ease of running ad-hoc queries
• Recognized the shift from traditional warehousing
• New features have extended capabilities to manage external processes and data
26. Think differently about business analytics
Business users require:
• True ad-hoc analysis
• Performance "at the glass"
• Less reliance on IT
Evolution required for big data analytics:
– Lower reliance on OLAP cubes and their associated administration
– Stop building multiple dependent data marts, databases, etc.
– Bring Hadoop into new use cases:
• "Dark data": web, social, history, etc.
• Enable NoSQL interoperability with existing tools