[DL Reading Group] Incorporating group update for speech enhancement based on convolutio... - Deep Learning JP
1. The document discusses a research paper on speech enhancement using a convolutional gated recurrent network (CGRN) and ordered neuron long short-term memory (ON-LSTM).
2. The proposed method aims to improve speech quality by incorporating both time and frequency dependencies using CGRN, and handling noise with varying change rates using ON-LSTM.
3. CGRN replaces fully-connected layers with convolutions, allowing it to capture local spatial structures in the frequency domain. ON-LSTM groups neurons based on the change rate of internal information to model hierarchical representations.
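The ordered-neuron mechanism in point 3 rests on the cumax activation (a cumulative softmax), which turns a pre-activation vector into monotonically non-decreasing gate values in (0, 1]; applied to the master forget gate, it makes higher-indexed neurons retain their state longer, so they change more slowly and can hold higher-level information. A minimal sketch of cumax alone, with made-up pre-activation values (the real ON-LSTM applies this to learned master forget/input gates inside the cell):

```python
import math

def cumax(z):
    # cumulative softmax: softmax followed by a running sum, yielding a
    # monotonically non-decreasing vector in (0, 1] whose last entry is 1
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    out, acc = [], 0.0
    for v in e:
        acc += v / s
        out.append(acc)
    return out

# hypothetical pre-activations for a 6-unit master forget gate
f_master = cumax([2.0, 1.0, 0.0, -1.0, -2.0, -3.0])
# low-indexed units get small gate values (forgotten quickly),
# high-indexed units get values near 1 (state kept, slow-changing)
```

The monotonic structure is what induces the grouping by change rate: once the gate value crosses toward 1 at some index, every neuron above that index is kept as a block.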
These are the slides for the PyCon Korea 2018 tutorial session "RL Adventure: From DQN to Rainbow DQN".
The tutorial is meant to help attendees understand Rainbow, the value-based reinforcement learning model DeepMind published in 2017, summarizing only the key points step by step from DQN through Rainbow.
Part 1: DQN, Double & Dueling DQN - 성태경
Part 2: PER and NoisyNet - 양홍선
Part 3: Distributional RL - 이의령
Part 4: RAINBOW - 김예찬
The related code and implementations are available at
https://github.com/hongdam/pycon2018-RL_Adventure
The detailed results are described on GitHub (in English):
https://github.com/jkatsuta/exp-18-1q
(exp1-exp6 under maddpg/experiments/my_notes/)
These are the slides (part 1) for a seminar at Rikkyo University.
Part 2 of the slides:
https://www.slideshare.net/JunichiroKatsuta/ss-108099542
Blog post (with video):
https://recruit.gmo.jp/engineer/jisedai/blog/multi-agent-reinforcement-learning/
A rough overview of hyperparameter search with Bayesian optimization.
The paper this presentation is based on:
Bergstra, James, et al. "Algorithms for Hyper-Parameter Optimization." Advances in Neural Information Processing Systems 24 (NIPS 2011).
https://hal.inria.fr/hal-00642998/
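The Tree-structured Parzen Estimator (TPE) from the Bergstra et al. paper can be caricatured in a few lines: split the past trials into a "good" set (lowest losses) and a "bad" set, fit a density to each, and propose the candidate that maximizes the good-to-bad density ratio. A toy 1-D sketch; the objective, Gaussian-kernel bandwidth, and hyperparameters (`gamma`, `n_candidates`) are all made up for illustration and are not the paper's Parzen estimator details:

```python
import random, math

def objective(x):
    # stand-in for a real validation loss; minimum at x = 2
    return (x - 2.0) ** 2

def tpe_suggest(trials, gamma=0.25, n_candidates=24):
    # split past trials into "good" (lowest gamma fraction) and "bad"
    trials = sorted(trials, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(trials)))
    good = [x for x, _ in trials[:n_good]]
    bad = [x for x, _ in trials[n_good:]] or good

    def kde(x, pts, bw=0.5):
        # crude Gaussian kernel density estimate over observed points
        return sum(math.exp(-0.5 * ((x - p) / bw) ** 2) for p in pts) / len(pts)

    # sample candidates near the "good" points, rank by density ratio l(x)/g(x)
    cands = [random.gauss(random.choice(good), 0.5) for _ in range(n_candidates)]
    return max(cands, key=lambda x: kde(x, good) / (kde(x, bad) + 1e-12))

random.seed(0)
trials = [(x, objective(x)) for x in (random.uniform(-5, 5) for _ in range(10))]
for _ in range(30):
    x = tpe_suggest(trials)
    trials.append((x, objective(x)))
best = min(trials, key=lambda t: t[1])
```

The density ratio plays the role of the expected-improvement criterion in the paper: candidates that look like past good trials but unlike past bad ones score highest.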
13. Goals of Introducing BI
• Facilitate the Business Decision-Making Process
• Power the Bottom Line (reduce overall operating costs)
• Achieve a Fully Coordinated Organization (align organizational goals and actions)