The document visualizes a scene from the movie Interstellar using projections. It begins with slides introducing the scene, then shows how to simplify the visualization by projecting the matrices into lower dimensions, making them easier to understand and display.
Learning RBM (Restricted Boltzmann Machine in Practice) - Mad Scientists
In deep learning, the RBM is the basic building block of each layer in a hierarchical model. In these slides we cover the basic components of an RBM: its bipartite graph structure, Gibbs sampling, Contrastive Divergence (CD-1), and the energy function.
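The CD-1 update mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the slides; the toy dimensions, learning rate, and binary sampling scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update: v0 -> h0 -> v1 -> h1, then nudge the parameters."""
    # Positive phase: sample hidden units given the data vector v0.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling back to the visible layer.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Contrastive Divergence gradient approximation (data term - model term).
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c

# Toy usage: 6 visible units, 3 hidden units, one training pattern.
W = 0.01 * rng.standard_normal((6, 3))
b = np.zeros(6)
c = np.zeros(3)
v = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    W, b, c = cd1_step(v, W, b, c)
```

After repeated CD-1 steps on the same pattern, the weights come to favor reconstructions resembling that pattern; CD-1 is a biased but cheap approximation to the true log-likelihood gradient.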
Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations - Mad Scientists
Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, Honglak Lee (ICML 2009)
These are my notes from reading and analyzing the paper for a master's seminar presentation. CDBN combines the strengths of CNNs and DBNs to achieve translation invariance and computational efficiency, and its probabilistic max-pooling makes it possible to build an undirected DBM capable of image restoration.
Sogang University Machine Learning and Data Mining lab seminar: Neural Networks for Newbies, and Convolutional Neural Networks. This is prerequisite material for understanding deep convolutional architectures.
Relational Mate Value: Consensus and Uniqueness in Romantic Evaluations - Mad Scientists
We usually define mate value as a set of traits; this is called the 'Classic model' in Social Exchange theory. In this presentation, we introduce the 'Relational model', which instead examines a person's 'uniqueness'. The authors found that, over time, mate value ratings become more precise under the Relational model than under the Classic model. We discuss in detail how they measure these traits.
While visiting Finland, I felt that a lot had already passed me by, because I was not familiar with Finland's start-up culture; frankly, I was not familiar with start-up culture at all.
Everything I had heard about it before came only from YouTube, Coursera, or books. But during my stay in Finland, I saw that many people there had already adopted start-up culture and an entrepreneurial mindset.
This presentation describes my experience in Helsinki, Finland, in 2012. In particular, I want to thank Fastr books, the Catch box team, and Startup sauna. Without them, this presentation and what I learned would not have come out like this.
I am very proud that those companies and organisations are my friends. I hope many Korean entrepreneurs read this presentation and are inspired to grow.
Face Feature Recognition System with Deep Belief Networks, for Korean/KIISE Thesis - Mad Scientists
I submitted this as a KIISE thesis, <face>, in 2014.
In this presentation, I explain why I use deep learning to find facial features and what the limitations of previous methods are.
Superhero movies are often treated as neither cultural nor ideological, but we argue they are thoroughly cultural: they deliver Americanization as if it were globalization. We analyze this topic from a cultural perspective and suggest an ideal alternative.
[SW Maestro] Team Loclas 1-2 Final Presentation - Mad Scientists
Using data mining, we classify the preferences of 90% of non-logged-in users from their search keywords.
This is worth doing because it turns otherwise useless data into useful input for easy-to-run targeted marketing.
Capitalism is a society that acknowledges the existence of classes, but it is clear that those classes must not infringe on the freedoms and rights guaranteed by democracy. By introducing advertisements that violate this principle, this presentation reveals and analyzes how the concept of class produced by modern capital constitutes a new kind of class that infringes on the rights of everyone under present-day democracy.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working with unstructured data. Speakers present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience includes machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices that have already converged can save iteration time. Skipping in-identical vertices, i.e. vertices with the same in-links, avoids duplicate computations and can likewise reduce iteration time. Road networks often contain chains that can be short-circuited before the PageRank computation to improve performance, since the final ranks of chain nodes are easy to calculate; this can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order, which can reduce the iteration time and the number of iterations, and also enables multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
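The first optimization above, skipping already-converged vertices, can be sketched as follows. This is a simplified illustration, not the STICD implementation: the graph, damping factor, and tolerance are assumed values, the graph is assumed to have no dangling nodes, and production versions would re-check a "converged" vertex if its in-neighbors later change.

```python
def pagerank_skip_converged(out_links, d=0.85, tol=1e-10, max_iter=100):
    """Pull-based PageRank that stops updating vertices whose rank
    has stopped changing. out_links[u] lists the targets of vertex u;
    every vertex is assumed to have at least one out-link (no dangling)."""
    n = len(out_links)
    # Build in-link lists so each vertex pulls rank from its sources.
    in_links = [[] for _ in range(n)]
    for u, targets in enumerate(out_links):
        for v in targets:
            in_links[v].append(u)
    rank = [1.0 / n] * n
    converged = [False] * n
    for _ in range(max_iter):
        new_rank = rank[:]
        changed = False
        for v in range(n):
            if converged[v]:
                continue  # skip work for already-converged vertices
            r = (1 - d) / n + d * sum(
                rank[u] / len(out_links[u]) for u in in_links[v])
            if abs(r - rank[v]) < tol:
                converged[v] = True
            else:
                changed = True
            new_rank[v] = r
        rank = new_rank
        if not changed:
            break
    return rank

# Usage: a 3-cycle, where every vertex ends up with rank 1/3.
ranks = pagerank_skip_converged([[1], [2], [0]])
```

The skip is what trades accuracy for iteration time: once a vertex is marked converged, its rank is frozen even though upstream ranks may still be drifting within the tolerance.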
The Building Blocks of QuestDB, a Time Series Database - javier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps as just another data type. But when performing real-time analytics, timestamps should be first-class citizens, and we need rich time semantics to get the most out of our data. We also need to handle ever-growing datasets while staying performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open-source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... - Subhajit Sahu
Abstract: Levelwise PageRank is an alternative method of PageRank computation that decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables ranks to be calculated in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in every iteration. It does, however, require that the input graph contain no dead ends. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph whose vertices were split by component. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by the submission of a large number of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
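The levelwise idea in the abstract can be sketched as follows. This is an illustrative simplification, not the reported implementation: the component decomposition is passed in as an assumed input (normally produced by an SCC algorithm such as Tarjan's), the graph is assumed dead-end-free, and the parameters are placeholder values.

```python
def levelwise_pagerank(out_links, comps, d=0.85, tol=1e-10, max_iter=100):
    """comps lists the strongly connected components (as vertex-id lists)
    in topological order of the condensation DAG. Each component is
    iterated to convergence while ranks of upstream components, already
    finalized, are treated as fixed inputs. Assumes no dead ends."""
    n = len(out_links)
    in_links = [[] for _ in range(n)]
    for u, targets in enumerate(out_links):
        for v in targets:
            in_links[v].append(u)
    rank = [1.0 / n] * n
    for comp in comps:
        for _ in range(max_iter):
            delta = 0.0
            new = {}
            for v in comp:
                # Contributions from inside comp use the iterating ranks;
                # contributions from earlier levels are already final.
                r = (1 - d) / n + d * sum(
                    rank[u] / len(out_links[u]) for u in in_links[v])
                new[v] = r
                delta = max(delta, abs(r - rank[v]))
            for v, r in new.items():
                rank[v] = r
            if delta < tol:
                break
    return rank

# Usage: vertex 2 feeds into the 2-cycle {0, 1}; no dead ends.
# Component {2} has no in-links, so its rank is exactly (1 - d) / n.
out_links = [[1], [0], [0]]
rank = levelwise_pagerank(out_links, comps=[[2], [0, 1]])
```

Because a component only reads ranks from itself and from levels already processed, components on the same level could run on separate machines with no per-iteration communication, which is the distribution property the abstract describes.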