Several recent papers have explored self-supervised learning methods for vision transformers (ViT). Key approaches include:
1. Masked prediction tasks that predict masked patches of the input image.
2. Contrastive learning using techniques like MoCo to learn representations by contrasting augmented views of the same image.
3. Self-distillation methods like DINO that distill a teacher ViT into a student ViT using different views of the same image.
4. Hybrid approaches that combine masked prediction with self-distillation, such as iBOT.
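The masked-prediction objective in point 1 can be illustrated with a toy sketch: hide a random subset of patches and score the model's reconstruction only on the hidden ones. The linear "model" `W`, the mask ratio, and all sizes below are invented for illustration; a real system would use a ViT encoder as in MAE.

```python
import numpy as np

# Toy sketch of the masked-patch objective (MAE-style):
# mask a random subset of patches, "reconstruct" all patches,
# and compute the loss only on the masked positions.
# The linear map W is a stand-in for a real ViT -- illustration only.
rng = np.random.default_rng(0)

num_patches, patch_dim, mask_ratio = 16, 8, 0.75
patches = rng.normal(size=(num_patches, patch_dim))   # flattened image patches
W = rng.normal(size=(patch_dim, patch_dim)) * 0.1     # hypothetical "model"

num_masked = int(mask_ratio * num_patches)
masked_idx = rng.choice(num_patches, size=num_masked, replace=False)

visible = patches.copy()
visible[masked_idx] = 0.0                 # zero out the masked patches

recon = visible @ W                       # predict every patch position
# Reconstruction loss measured only where the input was hidden.
loss = float(np.mean((recon[masked_idx] - patches[masked_idx]) ** 2))
print(loss > 0.0)
```

The key detail the sketch preserves is that the loss is computed exclusively over the masked positions, which is what forces the model to infer hidden content from visible context.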
This is an introduction to the AAAI 2023 paper "Are Transformers Effective for Time Series Forecasting?" and the Hugging Face blog post "Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)".
The document summarizes recent research related to "theory of mind" in multi-agent reinforcement learning. It discusses three papers that propose methods for agents to infer the intentions of other agents by applying concepts from theory of mind:
1. The papers propose that in multi-agent reinforcement learning, being able to understand the intentions of other agents could help with cooperation and increase success rates.
2. The methods aim to estimate the intentions of other agents by modeling their beliefs and private information, using ideas from theory of mind in cognitive science. This involves inferring information about other agents that is not directly observable.
3. Bayesian inference is often used to reason about the beliefs, goals, and private information of other agents based on their observed actions.
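The Bayesian reasoning in point 3 can be sketched in a few lines: maintain a belief over another agent's hidden goal and update it with Bayes' rule after each observed action. The two goals, the action likelihoods, and the trajectory below are all invented for illustration and do not come from the surveyed papers.

```python
import numpy as np

# Hypothetical sketch of goal inference via Bayes' rule:
# belief(goal) is proportional to prior(goal) * P(observed actions | goal).
goals = ["left", "right"]
prior = np.array([0.5, 0.5])

# Assumed likelihoods P(action | goal): an agent heading "left"
# mostly moves left, and vice versa.
likelihood = {
    "move_left":  np.array([0.8, 0.2]),
    "move_right": np.array([0.2, 0.8]),
}

belief = prior.copy()
for action in ["move_left", "move_left", "move_right"]:
    belief = belief * likelihood[action]   # unnormalized Bayes update
    belief = belief / belief.sum()         # renormalize to a distribution

inferred = goals[int(np.argmax(belief))]
print(inferred)  # → left
```

Even with one contradictory observation, the accumulated evidence keeps "left" the most probable goal; this is the same mechanism the surveyed methods scale up to richer belief and intention models.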
The document introduces several approaches to semi-supervised learning, including self-training, multi-view algorithms like co-training, generative models using EM, S3VMs which extend SVMs to incorporate unlabeled data, and graph-based algorithms. Semi-supervised learning can make use of large amounts of unlabeled data together with smaller amounts of labeled data to build accurate predictive models in domains where labeling data is expensive.
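Self-training, the first approach listed above, can be sketched with a deliberately tiny classifier: fit on the labeled points, pseudo-label the unlabeled points, and refit on the union. The nearest-centroid model and the 1-D data below are invented for illustration; real systems typically add a confidence threshold before accepting pseudo-labels.

```python
import numpy as np

# Hypothetical self-training sketch on invented 1-D data:
# two labeled clusters near 0 and 10, plus unlabeled points between them.
labeled_x = np.array([0.0, 1.0, 9.0, 10.0])
labeled_y = np.array([0, 0, 1, 1])
unlabeled_x = np.array([2.0, 8.0, 5.2])

def centroids(x, y):
    # Mean of each class -- a trivial stand-in for a real classifier.
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict(c, x):
    # Assign each point to the nearer centroid.
    return (np.abs(x - c[1]) < np.abs(x - c[0])).astype(int)

c = centroids(labeled_x, labeled_y)
pseudo_y = predict(c, unlabeled_x)        # pseudo-labels for unlabeled data

# Refit on labeled + pseudo-labeled data: the unlabeled points
# shift the centroids and hence the decision boundary.
all_x = np.concatenate([labeled_x, unlabeled_x])
all_y = np.concatenate([labeled_y, pseudo_y])
c = centroids(all_x, all_y)
preds = predict(c, np.array([4.0, 7.0]))
print(preds)  # → [0 1]
```

This captures the core idea the summary names: unlabeled data influences the final model only through the labels the model itself assigns.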
This document discusses Mahout, an Apache project for machine learning algorithms like classification, clustering, and pattern mining. It describes using Mahout with Hadoop to build a Naive Bayes classifier on Wikipedia data to classify articles into categories like "game" and "sports". The process includes splitting Wikipedia XML, training the classifier on Hadoop, and testing it to generate a confusion matrix. Mahout can also integrate with other systems like HBase for real-time classification.
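The train-then-evaluate pipeline described above can be sketched conceptually in pure Python: count word frequencies per class, classify with smoothed Naive Bayes, and tally a confusion matrix. This is a stand-in for the idea, not Mahout's API or its Hadoop workflow; the tiny "articles" under the "game" and "sports" labels are invented for illustration.

```python
from collections import Counter
import math

# Hypothetical miniature of the described pipeline: multinomial
# Naive Bayes over bag-of-words features, scored with a confusion matrix.
train = [
    ("chess board player move", "game"),
    ("puzzle board strategy", "game"),
    ("football goal team", "sports"),
    ("tennis match player team", "sports"),
]
test = [
    ("board puzzle move", "game"),
    ("football team match", "sports"),
]

classes = ["game", "sports"]
counts = {c: Counter() for c in classes}   # word counts per class
docs = Counter()                           # document counts per class
for text, label in train:
    docs[label] += 1
    counts[label].update(text.split())

vocab = {w for c in classes for w in counts[c]}

def predict(text):
    def log_score(c):
        total = sum(counts[c].values())
        s = math.log(docs[c] / len(train))          # class prior
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing a class.
            s += math.log((counts[c][w] + 1) / (total + len(vocab)))
        return s
    return max(classes, key=log_score)

# Confusion matrix: rows = true class, columns = predicted class.
cm = {t: {p: 0 for p in classes} for t in classes}
for text, label in test:
    cm[label][predict(text)] += 1
print(cm)
```

On this toy data both test articles land on the diagonal; at Mahout's scale the same matrix is what reveals which categories the classifier confuses.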