The document contains contact information for Ichigaku Takigawa including their email address ichigaku.takigawa@riken.jp, personal website URL https://itakigawa.github.io/, and mentions they are working with IBISML and ATR on materials informatics and bioinformatics. It also includes a link to their page https://itakigawa.page.link/IBISML for a PDF document.
Building Jupyter as a Service on Kubernetes from Scratch - Kubernetes Meetup Tokyo #43 (Preferred Networks)
Preferred Networks is developing a cloud service that provides a general-purpose atomic-level simulator for accelerating new-material development and materials exploration. Each customer gets an isolated environment in which users launch Jupyter Notebooks and can easily access the company's proprietary technology through an API via an in-house PyPI package. The multi-tenant environment is built on Kubernetes features: each customer is given a dedicated API server that is scaled according to its load, and per-customer controls such as network restrictions on Notebooks and node-placement constraints are enforced.
This talk presents how to build a multi-tenant Jupyter as a Service with Kubernetes.
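The per-tenant isolation described above (dedicated namespaces, traffic restrictions on notebook pods) can be sketched as Kubernetes manifests. This is a minimal illustration, not the talk's actual configuration; the tenant name, labels, and policy names are hypothetical.

```python
# Sketch: per-tenant Kubernetes manifests for an isolated Jupyter
# environment. All names and labels here are hypothetical examples.

def tenant_namespace(tenant: str) -> dict:
    """Namespace manifest giving each customer an isolated environment."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"tenant-{tenant}", "labels": {"tenant": tenant}},
    }

def notebook_network_policy(tenant: str) -> dict:
    """NetworkPolicy restricting notebook pods to same-namespace traffic."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "notebook-isolation",
                     "namespace": f"tenant-{tenant}"},
        "spec": {
            "podSelector": {"matchLabels": {"app": "jupyter-notebook"}},
            "policyTypes": ["Ingress", "Egress"],
            # An empty podSelector in from/to matches all pods in the
            # policy's own namespace, so cross-tenant traffic is blocked.
            "ingress": [{"from": [{"podSelector": {}}]}],
            "egress": [{"to": [{"podSelector": {}}]}],
        },
    }

ns = tenant_namespace("acme")
policy = notebook_network_policy("acme")
```

In practice these dicts would be serialized to YAML or applied through a Kubernetes client; scaling the per-customer API server by load would be handled separately, e.g. with a HorizontalPodAutoscaler.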
[DL Reading Group] Efficiently Modeling Long Sequences with Structured State Spaces (Deep Learning JP)
This document summarizes a research paper on modeling long-range dependencies in sequence data using structured state space models and deep learning. The proposed S4 model (1) derives recurrent and convolutional representations of state space models, (2) improves long-term memory using HiPPO matrices, and (3) efficiently computes state space model convolution kernels. Experiments show S4 outperforms existing methods on various long-range dependency tasks, achieves fast and memory-efficient computation comparable to efficient Transformers, and performs competitively as a general sequence model.
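Point (1) above, the equivalence between the recurrent and convolutional views of a state space model, can be verified numerically. The sketch below uses random toy matrices (not the HiPPO initialization the paper uses) and the naive kernel computation, not S4's efficient algorithm.

```python
import numpy as np

# A discretized linear SSM x_k = A x_{k-1} + B u_k, y_k = C x_k can be
# computed either as a recurrence or as a causal convolution with the
# kernel K = (CB, CAB, CA^2B, ...). Toy matrices, not HiPPO-initialized.

rng = np.random.default_rng(0)
n, L = 4, 16                        # state size, sequence length
A = rng.normal(size=(n, n)) * 0.3   # scaled down for stability
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
u = rng.normal(size=L)

# Recurrent view: step the hidden state through the sequence.
x = np.zeros((n, 1))
y_rec = []
for k in range(L):
    x = A @ x + B * u[k]
    y_rec.append((C @ x).item())
y_rec = np.array(y_rec)

# Convolutional view: materialize the kernel, then convolve causally.
K = np.array([(C @ np.linalg.matrix_power(A, i) @ B).item()
              for i in range(L)])
y_conv = np.array([sum(K[i] * u[k - i] for i in range(k + 1))
                   for k in range(L)])

assert np.allclose(y_rec, y_conv)
```

S4's contribution on this point is computing K efficiently for long sequences; the quadratic loop here is only for illustration.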
Several recent papers have explored self-supervised learning methods for vision transformers (ViT). Key approaches include:
1. Masked prediction tasks that predict masked patches of the input image.
2. Contrastive learning using techniques like MoCo to learn representations by contrasting augmented views of the same image.
3. Self-distillation methods like DINO that distill a teacher ViT into a student ViT using different views of the same image.
4. Hybrid approaches that combine masked prediction with self-distillation, such as iBOT.
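The masked-prediction setup in item 1 can be sketched on the data side: split an image into non-overlapping patches, hide a random subset, and treat the hidden patches as reconstruction targets. The ViT encoder itself is omitted here, and the patch size and mask ratio are illustrative choices.

```python
import numpy as np

# Data-side sketch of masked patch prediction for a ViT-style model.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))       # toy image
P = 8                               # patch size -> 4x4 = 16 patches

# Flatten the image into a (num_patches, P*P*3) matrix.
patches = (img.reshape(4, P, 4, P, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(16, -1))

mask_ratio = 0.5
masked_idx = rng.choice(16, size=int(16 * mask_ratio), replace=False)

inputs = patches.copy()
inputs[masked_idx] = 0.0            # zeroed here; real models use a
                                    # learned mask token instead
targets = patches[masked_idx]       # the model is trained to predict these
```

Approaches 2-4 differ in the training signal (contrastive pairs, teacher outputs, or both) but share this patch-tokenized input pipeline.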
ICCV19 Reading Group: "Learning Single Camera Depth Estimation using Dual-Pixels" (Hajime Mihara)
Slides from the ICCV paper-reading session at the 55th Computer Vision Study Group @ Kanto (コンピュータビジョン勉強会@関東).
The presentation explains the paper "Learning Single Camera Depth Estimation using Dual-Pixels".
https://kantocv.connpass.com/event/148011/
Copyright 2020 SAKURA internet Inc. All rights reserved.
Supplement: recommended learning resources
- Python Programmer
- Keith Galli
- Data Science Dojo
- Data Professor
- Crash Course AI
- Towards Data Science
- PyData
- Distill
- Kaggle (Notebooks)
- Copernicus MOOCs
- Earth Lab
- Geo-Python
- GEOG 883
Master Satellite Data in 7 Days: A Course from the Basics
Thank you for these seven days!
Recommended for home study:
"Master Satellite Data in 11 Days: Learning Guide 2020"
https://sorabatake.jp/11994/
Please take a look!
Please follow us on social media and subscribe to the channel!
Tellus official Twitter / Sorabatake (宙畑) Twitter
Tellus
https://www.tellusxdp.com/market/login/