The slides mainly demonstrate the viability of applying AI across different domains, evidenced by Tsai-Min's participation in AI-related competitions from 8 different domains during 2018.
* Questions customer support was asked most often during the pandemic
** Video conferencing
** A free public mailbox shared by multiple people got locked!
** What is "domain verification"?
* You only want a single feature, but you have to adopt an entire cloud system?!
** Google Meet vs Google for Nonprofits
** As more and more digital tools and processes enter our work: cost invested vs efficiency gained
** Is the organization ready to adopt Google for Nonprofits or M365 for Nonprofits?
* Micro-level needs vs the organization's overall digital policy and vision
I used AI-based technologies for data de-identification and re-identification in the 2022 Attack and Defense Contest hosted by the Research Center for Information Technology Innovation, Academia Sinica.
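The slides do not describe the contest techniques in detail, so as a purely illustrative sketch, here is one common de-identification building block: keyed pseudonymization of direct identifiers (the key name and record fields are hypothetical):

```python
# Illustrative sketch only: keyed pseudonymization, one common
# de-identification technique. The actual contest methods are not
# described in the slides.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-securely"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym.

    Without the key the token cannot be reversed; the same input always
    maps to the same token, which is exactly the linkage property that
    attack-side (re-identification) teams try to exploit.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Alice Chen", "diagnosis": "arrhythmia"}
record["name"] = pseudonymize(record["name"])
```

Because the mapping is deterministic, de-identified tables can still be joined across releases, which is both a feature and the main re-identification risk.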
ECG Signal Super-resolution for Heart Failure Detection, Tsai-Min Chen (陳在民)
In the 2022 Data Science Competition hosted by Tri-Service General Hospital, I used a deep learning-based signal super-resolution AI model together with a challenge-best AI model to detect possible heart failure from electrocardiograms.
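The slides do not give the super-resolution architecture, so the following is only a minimal sketch of what a 1-D signal super-resolution network can look like (all layer choices, filter counts, and the 4x upsampling factor are assumptions):

```python
# Minimal 1-D super-resolution sketch (assumed architecture, NOT the
# competition model): upsample a low-rate ECG segment 4x with
# Conv1D + UpSampling1D, trained against the high-rate signal via MSE.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(None, 1))              # low-resolution ECG
x = layers.Conv1D(32, 9, padding="same", activation="relu")(inputs)
x = layers.UpSampling1D(4)(x)                      # 4x more samples
x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
outputs = layers.Conv1D(1, 9, padding="same")(x)   # reconstructed signal
sr_model = keras.Model(inputs, outputs)
sr_model.compile(optimizer="adam", loss="mse")

# A 100-sample input comes out as 400 samples.
demo = sr_model.predict(np.zeros((1, 100, 1)), verbose=0)
```

The reconstructed high-resolution signal would then be fed to the downstream heart-failure classifier.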
Common Mistakes Made by Machine Learning Engineers, Tsai-Min Chen (陳在民)
Data engineering might be considered easy, since many ready-made framework packages exist. However, engineers who do not truly understand their data may find that their manipulations turn out to be invalid once put into practice. I therefore held an open discussion of these common mistakes. Instead of giving my own answers, I let the AI engineers in the audience vote on whether each manipulation was valid; the vote results are shown in the slides.
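As a concrete illustration of the kind of manipulation that could be put to such a vote (a hypothetical example, not one of the actual slide questions): fitting a scaler on the full dataset before splitting leaks test-set statistics into training.

```python
# A classic invalid manipulation: fitting the scaler on ALL rows before
# the train/test split leaks test-set statistics into training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).normal(loc=5.0, size=(100, 3))

# Invalid: the scaler has already seen the test rows.
X_leaky = StandardScaler().fit_transform(X)
X_train_bad, X_test_bad = train_test_split(X_leaky, random_state=0)

# Valid: split first, then fit the scaler on the training rows only.
X_train, X_test = train_test_split(X, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_ok = scaler.transform(X_train)
X_test_ok = scaler.transform(X_test)
```

The leaky version often looks harmless on small examples, which is exactly why it splits a room of engineers when put to a vote.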
[Slide 23: logos of the eight competitions, one per domain: 痞客邦黑客松 (PIXNET Hackathon), MolHackII 線上黑客松 (MolHack II online hackathon), 人工智慧大賽 (AI competition), 中國生理訊號賽 (China Physiological Signal Challenge), 生技醫療創新黑客松 (biomedical innovation hackathon), 全國智慧製造大數據 (national smart manufacturing big data competition), 創造勝利Fund程式 (programming contest), 農業創新黑客松 (agricultural innovation hackathon)]
[Architecture diagram: TM-NN. Input (None, 256000, 1) → CNN block (Convolution-Pooling, Convolution, Convolution) ×5 → Bidirectional RNN with Attention → Dense → Output (None, 3)]
1. A combination of a CNN and a bidirectional RNN with an attention layer.
2. 5 CNN blocks, each consisting of a convolution-pooling layer followed by 2 convolution layers.
3. Dropout was applied by randomly dropping 20% of the connections to the next block or layer.
4. The last CNN block was connected to a bidirectional RNN with an attention layer, and batch normalization was applied before the connection to the fully-connected layer.
5. The LeakyReLU activation function was used for each layer, except the last fully-connected layer, where the sigmoid activation function was used.
6. The model was trained with the categorical cross-entropy loss function and the Adam optimizer.
7. Newly released data was used as the validation set to optimize accuracy.
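The points above can be sketched in Keras. Only the overall topology (input shape (256000, 1), 5 CNN blocks of one convolution-pooling plus two convolution layers, 20% dropout, a batch-normalized bidirectional RNN with attention, and a sigmoid dense output trained with categorical cross-entropy and Adam) follows the slide; the filter counts, kernel sizes, pooling factor, GRU width, and attention variant are assumptions.

```python
# Hypothetical Keras sketch of TM-NN; hyperparameters are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def cnn_block(x, filters):
    # One convolution-pooling layer...
    x = layers.Conv1D(filters, 9, padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling1D(4)(x)
    # ...followed by 2 convolution layers.
    for _ in range(2):
        x = layers.Conv1D(filters, 9, padding="same")(x)
        x = layers.LeakyReLU()(x)
    # Randomly drop 20% of the connections to the next block.
    return layers.Dropout(0.2)(x)

inputs = keras.Input(shape=(256000, 1))
x = inputs
for filters in (16, 32, 64, 64, 64):                  # 5 CNN blocks
    x = cnn_block(x, filters)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Attention()([x, x])                        # assumed attention variant
x = layers.GlobalAveragePooling1D()(x)
x = layers.BatchNormalization()(x)                    # before the dense layer
outputs = layers.Dense(3, activation="sigmoid")(x)    # sigmoid, per the slide

tm_nn = keras.Model(inputs, outputs)
tm_nn.compile(optimizer="adam", loss="categorical_crossentropy")
```

With a pooling factor of 4 in each of the 5 blocks, the 256,000-sample input is reduced to 250 timesteps before the bidirectional RNN, which keeps the recurrent part tractable.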