This document contains a list of URLs related to an electrician site in Toledo, Spain. It includes the home page and pages for finding an authorized, cheap, or urgent electrician in Toledo as well as pages for electricians and electrical companies in Toledo and a nearby industrial park.
Energy management solutions: A Framework for your Energy Management System (Yokogawa)
ISO50001 International Standard, a Framework for your Energy Management System. Energy security, energy cost, and climate change are global concerns. As a first step toward coping with these challenges, top management needs to measure energy metrics effectively and look for formal processes to execute energy management policies.
Bioschemas Community: Developing profiles over Schema.org to make life scienc... (Alasdair Gray)
The Bioschemas community (http://bioschemas.org) is a loose collaboration formed by a wide range of life science resource providers and informaticians. The community is developing profiles over Schema.org to enable life science resources, such as data about a specific protein, sample, or training event, to be more discoverable on the web. While the content of well-known resources such as Uniprot (for protein data) is easily discoverable, there is a long tail of specialist resources that would benefit from embedding Schema.org markup in a standardised approach.
The community has developed twelve profiles for specific types of life science resources (http://bioschemas.org/specifications/), with another six at an early draft stage. For each profile, a set of use cases has been identified. These typically focus on search, but several facilitate lightweight data exchange to support data aggregators such as Identifiers.org, FAIRsharing.org, and BioSamples. The next stage of developing a profile consists of mapping the terms used in the use cases to existing properties in Schema.org and domain ontologies. The properties are then prioritised to support the use cases, with a minimal set of about six properties identified, along with a larger set of recommended and optional properties. For each property, an expected cardinality is defined and, where appropriate, object values are specified from controlled vocabularies. Before a profile is finalised, it must first be demonstrated that resources can deploy the markup.
In this talk, we will outline the progress that has been made by the Bioschemas Community in a single year through three hackathon events. We will discuss the processes followed by the Bioschemas Community to foster collaboration, and highlight the benefits and drawbacks of using open Google documents and spreadsheets to support the community develop the profiles. We will conclude by summarising future opportunities and directions for the community.
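As an illustration of the kind of embedded markup the abstract describes (the property set below is a hypothetical minimal profile, not the official Bioschemas Protein specification), a resource page could generate Schema.org-style JSON-LD like this:

```python
import json

def protein_jsonld(name, identifier, url):
    # Build a minimal Schema.org-style JSON-LD record for a protein.
    # The property set is an illustrative minimal profile, not the
    # official Bioschemas Protein profile.
    return {
        "@context": "https://schema.org",
        "@type": "Protein",  # Bioschemas-proposed type, assumed here
        "name": name,
        "identifier": identifier,
        "url": url,
    }

markup = protein_jsonld(
    "Hemoglobin subunit alpha", "P69905",
    "https://www.uniprot.org/uniprot/P69905")
print(json.dumps(markup, indent=2))
```

Embedding a block like this in a page's HTML is what makes the record visible to generic web crawlers.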
This document describes the Airwil Green Avenue affordable housing project in Greater Noida, highlighting its amenities like a jogging track, open air theatre, central green space, parks, gym, and spa facilities. The project is located on a corner plot next to a 130m wide expressway and 500m from a proposed metro line, providing excellent connectivity.
Verso i bigdata giudiziari? (Toward judicial big data?) (Nexa Torino, July 2016) (Simone Aliprandi)
The slides used by Simone Aliprandi for the seminar "Verso i bigdata giudiziari? Problemi di privacy e copyright nella diffusione di sentenze sul web" (Toward judicial big data? Privacy and copyright issues in publishing court rulings on the web), held at the Nexa Center of the Politecnico di Torino (info: http://juriswiki.it/news/al-politecnico-di-torino-si-parla-di-bigdata-giudiziari-e-di-juriswiki)
This material is one of the BOAZ 2016 second-half project topics: the outcome of putting into practice, during the Advanced regular sessions, the theories, foundational knowledge, and tool skills learned in the Base regular sessions.
*** A Life Guidebook for 20- and 30-something Solo Dwellers in Seoul ***
A life guidebook created for people in their 20s and 30s living alone in Seoul. Its main purpose is to provide information on eating (food) and living (housing).
6th cohort: Kim Seung-hyo, Applied Statistics, Chung-Ang University
6th cohort: Kim Jae-eun, Visual Design, Ewha Womans University
7th cohort: Park Da-hye, Statistics, Hankuk University of Foreign Studies
** BOAZ, Korea's first university student big data society **
Blog : http://BOAZbigdata.com
Facebook : http://fb.com/BOAZbigdata
Oxalide MorningTech #1 - BigData
1st MorningTech @Oxalide, hosted by Ludovic Piot (@lpiot), December 15, 2016.
For this first edition of the MorningTech, we offer an overview of one of today's hot topics: Big Data.
Beyond the buzzword, we will cover:
The key concepts
The key stages of Big Data projects and the technologies to use (storage, ingestion, …)
The challenges of Big Data architectures (lambda architecture, …)
Artificial intelligence (machine learning, deep learning, …)
And we will finish with a Big Data use case on AWS, built around the gyroscope data of your mobile users
Subject: Oxalide's 1st MorningTech talk about BigData.
Date: 15-dec-2016
Speakers: Ludovic Piot (@lpiot, @oxalide)
Language: French
SpeakerDeck link: https://speakerdeck.com/lpiot/oxalide-morningtech-number-1-bigdata
SlideShare link: https://www.slideshare.net/LudovicPiot/oxalide-morningtech-1-bigdata
YouTube Video capture: https://youtu.be/7O85lRzvMY0
Main topics:
* The big challenges of BigData
** Gartner's 3 Vs: volume, variety, velocity
* Data storage
** data lake
** the technologies
* Data ingestion
** ETL
** data streaming
** the technologies
* Compute challenges
** map-reduce
** Spark
** lambda architecture
* Demo of a BigData platform on AWS
* Artificial intelligence
** exploratory data science and notebooks,
** machine learning,
** deep learning,
** data pipelines
** the technologies
* Going further
** Data governance
** Dataviz
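The map-reduce entry in the topic list can be illustrated with a tiny single-process sketch (plain Python standing in for a real Hadoop job; the sample lines are invented):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big compute", "data lake"]
print(reduce_phase(map_phase(lines)))
# {'big': 2, 'data': 2, 'compute': 1, 'lake': 1}
```

On a real cluster, the map and reduce phases run in parallel across many machines, with the framework handling the shuffle between them.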
DNA - Einstein - Data science and bigdata (Rolf Koski)
This document discusses DNA's journey in data science and big data. The big drivers of change were the omnichannel customer, demanding more data and analytics, and new technologies like cloud computing and data science, providing virtually unlimited scale and processing power. It outlines DNA's achievements in using these technologies to understand customers better, increase sales and marketing ROI, and automate many processes. Upcoming areas include expanding into artificial intelligence, chatbots, and speech understanding. On the culture side, it emphasizes thinker-doers who can code leading projects, and openly demonstrating work to connect with others.
This document provides an overview of Hadoop and Big Data. It begins with introducing key concepts like structured, semi-structured, and unstructured data. It then discusses the growth of data and need for Big Data solutions. The core components of Hadoop like HDFS and MapReduce are explained at a high level. The document also covers Hadoop architecture, installation, and developing a basic MapReduce program.
Experience report on a large IoT / BigData project: details of the real-world Hager... case (FactoVia)
Vincent Thavonekham and Philippe Guédez of VISEO,
A Reactive Extensions (Rx.Net) architecture to overcome the SigFox limits and absorb the heavy IoT load with BigData (using ASP.Net Core microservices). All technical details unveiled.
The document provides an introduction to the concepts of big data and how it can be analyzed. It discusses how traditional tools cannot handle large data files exceeding gigabytes in size. It then introduces the concepts of distributed computing using MapReduce and the Hadoop framework. Hadoop makes it possible to easily store and process very large datasets across a cluster of commodity servers. It also discusses programming interfaces like Hive and Pig that simplify writing MapReduce programs without needing to use Java.
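To illustrate why SQL-style interfaces such as Hive are attractive compared with hand-written MapReduce in Java, an aggregation becomes a single statement (SQLite is used here purely as a stand-in for Hive; the table and data are invented):

```python
import sqlite3

# In Hive, an aggregation that would take a full MapReduce program
# becomes one SQL statement. SQLite stands in for Hive here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (url TEXT, user_id INTEGER)")
conn.executemany("INSERT INTO page_views VALUES (?, ?)",
                 [("/home", 1), ("/home", 2), ("/docs", 1)])

rows = conn.execute(
    "SELECT url, COUNT(*) AS views FROM page_views "
    "GROUP BY url ORDER BY views DESC"
).fetchall()
print(rows)  # [('/home', 2), ('/docs', 1)]
```

Hive compiles such statements into MapReduce (or newer execution engines) behind the scenes, which is exactly the simplification the summary describes.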
Integración Bigdata: punto de entrada al IoT (Big Data integration: an entry point to the IoT) - LibreCon 2016 (LibreCon)
The document discusses Big Data integration as an entry point to the Internet of Things. It explains the origin of Big Data and how it is used to ingest and consume data from web browsing, servers, and devices. It also discusses how Internet of Things data can be exploited using Big Data, and how a pilot project monitored activity on PCs and mobile phones. It concludes that free-software technologies exist for Big Data and the Internet of Things and that, although there are many options, some techn
This document provides an overview of big data. It defines big data as large volumes of diverse data that are growing rapidly and require new techniques to capture, store, distribute, manage, and analyze. The key characteristics of big data are volume, velocity, and variety. Common sources of big data include sensors, mobile devices, social media, and business transactions. Tools like Hadoop and MapReduce are used to store and process big data across distributed systems. Applications of big data include smarter healthcare, traffic control, and personalized marketing. The future of big data is promising with the market expected to grow substantially in the coming years.
Big data refers to the massive amounts of unstructured data that are growing exponentially. Hadoop is an open-source framework that allows processing and storing large data sets across clusters of commodity hardware. It provides reliability and scalability through its distributed file system HDFS and MapReduce programming model. The Hadoop ecosystem includes components like Hive, Pig, HBase, Flume, Oozie, and Mahout that provide SQL-like queries, data flows, NoSQL capabilities, data ingestion, workflows, and machine learning. Microsoft integrates Hadoop with its BI and analytics tools to enable insights from diverse data sources.
This presentation, by big data guru Bernard Marr, outlines in simple terms what Big Data is and how it is used today. It covers the 5 V's of Big Data as well as a number of high value use cases.
This document provides an overview of bio big data and related technologies. It discusses what big data is and why bio big data is necessary given the large size of genomic data sets. It then outlines and describes Hadoop, Spark, machine learning, and streaming in the context of bio big data. For Hadoop, it explains HDFS, MapReduce, and the Hadoop ecosystem. For Spark, it covers RDDs, Spark SQL, MLlib, and Spark Streaming. The document is intended as an introduction to key concepts and tools for working with large biological data sets.
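The Spark RDD operations mentioned above follow a transform-then-aggregate pattern that can be sketched in plain Python (no Spark involved; the sample read data is invented, in the spirit of the genomics setting):

```python
# Simulate an RDD pipeline: filter -> map -> reduceByKey, in plain Python.
# Each tuple is (chromosome, read quality score).
reads = [("chr1", 40), ("chr2", 10), ("chr1", 25), ("chr2", 55)]

# filter: keep reads with quality >= 20
filtered = [r for r in reads if r[1] >= 20]
# map: emit (chromosome, 1) for each surviving read
mapped = [(chrom, 1) for chrom, _ in filtered]
# reduceByKey: count reads per chromosome
counts = {}
for chrom, n in mapped:
    counts[chrom] = counts.get(chrom, 0) + n
print(counts)  # {'chr1': 2, 'chr2': 1}
```

In actual Spark, the same pipeline would be `reads.filter(...).map(...).reduceByKey(...)` over a distributed RDD, executed lazily across the cluster.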
The document is a slide deck for a training on Hadoop fundamentals. It includes an agenda that covers what big data is, an introduction to Hadoop, the Hadoop architecture, MapReduce, Pig, Hive, Jaql, and certification. It provides overviews and explanations of these topics through multiple slides with images and text. The slides also describe hands-on labs for attendees to complete exercises using these big data technologies.
Hadoop is a framework for distributed storage and processing of large datasets across clusters of computers using simple programming models. It provides reliable, scalable processing on clusters of commodity hardware and can handle data from multiple sources and formats in a unified manner.
This document discusses building k-nearest-neighbor graphs from large text data. It presents a method called CTPH that uses locality-sensitive hashing to efficiently construct k-NN graphs at scale. The method was tested on datasets of 200k to 800k spam subject lines. Results showed CTPH was up to 10x faster than alternative map-reduce approaches while achieving reasonable, though limited, recall. Future work to improve recall and evaluate graph quality was discussed.
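The locality-sensitive-hashing idea behind such methods can be sketched with MinHash over word sets (a generic LSH illustration, not the CTPH method from the document; the example subject lines are invented):

```python
import hashlib

def minhash_signature(text, num_hashes=8):
    # MinHash signature over a text's word set: for each seeded hash
    # function, keep the minimum hash value over all words. Texts with
    # overlapping word sets tend to share signature slots.
    words = set(text.lower().split())
    return [
        min(int(hashlib.md5(f"{seed}:{w}".encode()).hexdigest(), 16)
            for w in words)
        for seed in range(num_hashes)
    ]

def similarity(a, b):
    # Fraction of matching signature slots approximates Jaccard similarity.
    sa, sb = minhash_signature(a), minhash_signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

near = similarity("cheap meds online now", "cheap meds online today")
far = similarity("cheap meds online now", "quarterly earnings report")
print(round(near, 2), round(far, 2))
```

Bucketing items by bands of their signatures is what lets a k-NN graph be built without comparing every pair, which is the source of the speedup the summary reports.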
This document provides an overview of Hadoop and big data use cases. It discusses the evolution of business analytics and data processing, as well as the architecture of traditional RDBMS systems compared to Hadoop. Examples of how companies have used Hadoop include a bank improving risk modeling by combining customer data, a telecom reducing churn by analyzing call logs, and a retailer targeting promotions by analyzing point-of-sale transactions. Hadoop allows these companies to gain valuable business insights from large and diverse data sources.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today's world, where data privacy and compliance are top priorities for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) they are auto-generated from declarative data annotations; (2) they respect user-level consent and preferences; (3) they are context-aware, encoding a different set of transformations for different use cases; (4) they are portable: while the SQL logic is implemented in only one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
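A much-simplified sketch of the auto-generation idea (the annotation scheme, function, and table are invented for illustration; ViewShift's actual mechanism differs):

```python
def compliance_view_sql(table, annotations):
    # Generate a compliance-enforcing SQL view from per-column annotations.
    # `annotations` maps column -> policy: 'pass' keeps the column,
    # 'redact' nulls it out, 'hash' replaces it with a SHA-256 digest.
    # This is an invented, minimal annotation scheme for illustration.
    exprs = []
    for col, policy in annotations.items():
        if policy == "pass":
            exprs.append(col)
        elif policy == "redact":
            exprs.append(f"NULL AS {col}")
        elif policy == "hash":
            exprs.append(f"SHA2({col}, 256) AS {col}")
    return (f"CREATE VIEW {table}_compliant AS "
            f"SELECT {', '.join(exprs)} FROM {table}")

sql = compliance_view_sql("profiles", {"user_id": "hash",
                                       "country": "pass",
                                       "email": "redact"})
print(sql)
```

Because the output is plain SQL, a catalog can transparently substitute the view for the raw table at query-resolution time, which is the routing behavior the slides describe.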
Predictably Improve Your B2B Tech Company's Performance by Leveraging Data (Kiwi Creative)
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
Global Situational Awareness of A.I. and where it's headed (Vikram Sood)
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we're lucky, we'll be in an all-out race with the CCP; if we're unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working with unstructured data. Speakers present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup and is sponsored by Zilliz, maintainers of Milvus.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W... (Social Samosa)
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
State of Artificial Intelligence Report 2023 (kuntobimo2016)
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found