RNNs are neural networks that can handle sequence data by incorporating a time component. They learn from past sequence data to predict future states in new sequence data. The document discusses RNN architecture, which uses a hidden layer that receives both the current input and the previous hidden state. It also covers backpropagation through time (BPTT) for training RNNs on sequence data. Examples are provided to implement an RNN from scratch using TensorFlow and Keras to predict a noisy sine wave time series.
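The recurrence described above (the hidden layer receiving both the current input and the previous hidden state) can be sketched in a few lines of NumPy. This is a hypothetical minimal illustration with made-up sizes, not the document's own code:

```python
import numpy as np

# Hypothetical toy dimensions: input size 3, hidden size 4, output size 1.
rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))   # input-to-hidden weights
W = rng.normal(size=(4, 4))   # hidden-to-hidden (recurrent) weights
V = rng.normal(size=(1, 4))   # hidden-to-output weights
b = np.zeros(4)
c = np.zeros(1)

def rnn_step(x_t, h_prev):
    """One recurrence step: h(t) = f(U x(t) + W h(t-1) + b)."""
    h_t = np.tanh(U @ x_t + W @ h_prev + b)
    y_t = V @ h_t + c          # linear output, as in regression
    return h_t, y_t

h = np.zeros(4)                # initial hidden state
xs = rng.normal(size=(5, 3))   # a length-5 input sequence
for x_t in xs:
    h, y = rnn_step(x_t, h)    # h carries information across time steps
```

Because `h` is fed back into each step, the prediction at time t depends on the whole past of the sequence.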
Explanation on Tensorflow example - Deep MNIST for Experts, by 홍배 김
You can find the exact, detailed network architecture of the 'Deep MNIST for Experts' example from TensorFlow's tutorials. Descriptions are also added to the program for better understanding.
Gentlest Introduction to Tensorflow - Part 3, by Khor SoonHin
Articles:
* https://medium.com/all-of-us-are-belong-to-machines/gentlest-intro-to-tensorflow-part-3-matrices-multi-feature-linear-regression-30a81ebaaa6c
* https://medium.com/all-of-us-are-belong-to-machines/gentlest-intro-to-tensorflow-4-logistic-regression-2afd0cabc54
Video: https://youtu.be/F8g_6TXKlxw
Code: https://github.com/nethsix/gentle_tensorflow
In this part, we:
* Use Tensorflow for linear regression models with multiple features
* Use Tensorflow for logistic regression models with multiple features. Specifically:
* Predict multi-class/discrete outcome
* Explain why we use cross-entropy as cost function
* Explain why we use softmax
* Tensorflow Cheatsheet #1
* Single feature linear regression
* Multi-feature linear regression
* Multi-feature logistic regression
Video: https://youtu.be/dYhrCUFN0eM
Article: https://medium.com/p/the-gentlest-introduction-to-tensorflow-248dc871a224
Code: https://github.com/nethsix/gentle_tensorflow/blob/master/code/linear_regression_one_feature.py
This alternative introduction to Google's official Tensorflow (TF) tutorial strips away the unnecessary concepts that overly complicate getting started. The goal is to use TF to perform Linear Regression (LR) with only a single feature. We show how to model the LR using a TF graph, how to define the cost function to measure how well an LR model fits the dataset, and finally how to train the LR model to find the best fit.
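The single-feature linear regression workflow described above (model, cost function, training loop) can be sketched framework-free. This is a minimal hypothetical illustration in plain NumPy, not the tutorial's actual TF code:

```python
import numpy as np

# Hypothetical data generated from y = 2x + 1 (noise-free, for clarity).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0     # model: y_hat = w*x + b
lr = 0.5            # learning rate
for _ in range(2000):
    y_hat = w * x + b
    err = y_hat - y
    # Gradients of the mean squared error cost
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, `w` and `b` recover the slope and intercept; in TF the same loop is expressed as a graph plus an optimizer step.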
TensorFlow is a wonderful tool for rapidly implementing neural networks. In this presentation, we will learn the basics of TensorFlow and show how neural networks can be built with just a few lines of code. We will highlight some of the confusing bits of TensorFlow as a way of developing the intuition necessary to avoid common pitfalls when developing your own models. Additionally, we will discuss how to roll our own Recurrent Neural Networks. While many tutorials focus on using built in modules, this presentation will focus on writing neural networks from scratch enabling us to build flexible models when Tensorflow’s high level components can’t quite fit our needs.
About Nathan Lintz:
Nathan Lintz is a research scientist at indico Data Solutions, where he is responsible for developing machine learning systems in the domains of language detection, text summarization, and emotion recognition. Outside of work, Nathan is currently writing a book on TensorFlow as an extension to his tutorial repository https://github.com/nlintz/TensorFlow-Tutorials
Link to video https://www.youtube.com/watch?v=op1QJbC2g0E&feature=youtu.be
Gentlest Introduction to Tensorflow - Part 2, by Khor SoonHin
Video: https://youtu.be/Trc52FvMLEg
Article: https://medium.com/@khor/gentlest-introduction-to-tensorflow-part-2-ed2a0a7a624f
Code: https://github.com/nethsix/gentle_tensorflow
Continuing from Part 1, where we used Tensorflow to perform linear regression for a model with a single feature, here we:
* Use Tensorboard to visualize linear regression variables and the Tensorflow network graph
* Perform stochastic/mini-batch/batch gradient descent
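The three gradient-descent flavors listed above differ only in how much data feeds each parameter update. A hypothetical sketch of just the batching logic, independent of TensorFlow:

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    """Yield shuffled mini-batches; batch_size=1 gives stochastic GD,
    batch_size=len(X) gives full-batch GD."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], y[sel]

rng = np.random.default_rng(0)
X = np.arange(10.0).reshape(10, 1)
y = 3.0 * X[:, 0]
batches = list(minibatches(X, y, batch_size=4, rng=rng))
# 10 examples with batch_size=4 -> batches of sizes 4, 4, 2
```

Each yielded pair would feed one optimizer step; the shuffle makes successive epochs visit the data in a different order.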
A fast-paced introduction to TensorFlow 2 about some important new features (such as generators and the @tf.function decorator) and TF 1.x functionality that's been removed from TF 2 (yes, tf.Session() has retired).
Concise code samples are presented to illustrate how to use new features of TensorFlow 2. You'll also get a quick introduction to lazy operators (if you know FRP this will be super easy), along with a code comparison between TF 1.x/iterators with tf.data.Dataset and TF 2/generators with tf.data.Dataset.
Finally, we'll look at some tf.keras code samples that are based on TensorFlow 2. Although familiarity with TF 1.x is helpful, newcomers with an avid interest in learning about TensorFlow 2 can benefit from this session.
Introduction to TensorFlow, by Machine Learning at Berkeley - Ted Xiao
A workshop introducing the TensorFlow Machine Learning framework. Presented by Brenton Chu, Vice President of Machine Learning at Berkeley.
This presentation covers how to construct, train, evaluate, and visualize neural networks in TensorFlow 1.0.
http://ml.berkeley.edu
An introduction to Google's AI Engine: look deeper into artificial neural networks and machine learning, and see how even our simplest neural network can be codified and used for data analytics.
This is the fourth slide deck for the machine learning workshop at Hulu. Machine learning methods are summarized at the beginning of the slides, and boosting trees are then introduced. You are recommended to try boosting trees when the number of features is not too large (<1000).
Time Series Analysis: Basic Stochastic Signal Recovery, by Daniel Cuneo
A simple case of recovering a stochastic signal from a time series with a linear combination of nuisance signals.
Errata:
* Corrected an error in the Gaussian fit.
* Corrected the jackknife example and un-centered the data.
* Corrected significant-figure language and rationale.
* Removed the jackknife calculation of the mean; reformatted cells.
Continuous and discrete elementary signals: continuous and discrete unit step signals, exponential and ramp signals, continuous and discrete convolution of time signals, adding and subtracting two given signals, uniform random numbers between (0, 1), random binary waves, and probability density functions. Find the mean and variance for the above distributions.
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila..., by Codemotion
With TensorFlow, deep machine learning transitions from an area of research to mainstream software engineering. In this session, we'll work together to construct and train a neural network that recognises handwritten digits. Along the way, we'll discover some of the "tricks of the trade" used in neural network design, and finally, we'll bring the recognition accuracy of our model above 99%.
This session for beginners introduces tf.data APIs for creating data pipelines by combining various "lazy operators" in tf.data, such as filter(), map(), batch(), zip(), flatmap(), take(), and so forth.
Familiarity with method chaining and TF2 is helpful (but not required). If you are comfortable with FRP, the code samples in this session will be very familiar to you.
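The "lazy operator" idea behind these tf.data methods can be imitated with plain Python generators. This is a hedged analogy only (hypothetical helper names; tf.data's real operators are methods on tf.data.Dataset):

```python
def gen_map(fn, it):
    for x in it:
        yield fn(x)          # nothing runs until the pipeline is consumed

def gen_filter(pred, it):
    for x in it:
        if pred(x):
            yield x

def gen_batch(n, it):
    batch = []
    for x in it:
        batch.append(x)
        if len(batch) == n:
            yield batch
            batch = []
    if batch:
        yield batch          # final partial batch

# Chain lazily, like dataset.filter(...).map(...).batch(...)
pipeline = gen_batch(2, gen_map(lambda x: x * x,
                                gen_filter(lambda x: x % 2 == 0, range(10))))
result = list(pipeline)      # evaluation happens only here
```

As with tf.data, building the pipeline does no work; elements flow through all three stages only when the result is consumed.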
Need help filling out the missing sections of this code, asked by lauracallander
The missing sections are steps 6, 7, and 9.
Step 1: Load the Tox21 Dataset.
import numpy as np
np.random.seed(456)
import tensorflow as tf
tf.set_random_seed(456)
import matplotlib.pyplot as plt
import deepchem as dc
from sklearn.metrics import accuracy_score
_, (train, valid, test), _ = dc.molnet.load_tox21()
train_X, train_y, train_w = train.X, train.y, train.w
valid_X, valid_y, valid_w = valid.X, valid.y, valid.w
test_X, test_y, test_w = test.X, test.y, test.w
Step 2: Remove extra datasets.
# Remove extra tasks
train_y = train_y[:, 0]
valid_y = valid_y[:, 0]
test_y = test_y[:, 0]
train_w = train_w[:, 0]
valid_w = valid_w[:, 0]
test_w = test_w[:, 0]
Step 3: Define placeholders that accept minibatches of different sizes.
# Generate tensorflow graph
d = 1024
n_hidden = 50
learning_rate = .001
n_epochs = 10
batch_size = 100
with tf.name_scope("placeholders"):
    x = tf.placeholder(tf.float32, (None, d))
    y = tf.placeholder(tf.float32, (None,))
Step 4: Implement a hidden layer.
with tf.name_scope("hidden-layer"):
    W = tf.Variable(tf.random_normal((d, n_hidden)))
    b = tf.Variable(tf.random_normal((n_hidden,)))
    x_hidden = tf.nn.relu(tf.matmul(x, W) + b)
Step 5: Complete the fully connected architecture.
with tf.name_scope("output"):
    W = tf.Variable(tf.random_normal((n_hidden, 1)))
    b = tf.Variable(tf.random_normal((1,)))
    y_logit = tf.matmul(x_hidden, W) + b
    # the sigmoid gives the class probability of 1
    y_one_prob = tf.sigmoid(y_logit)
    # Rounding P(y=1) will give the correct prediction.
    y_pred = tf.round(y_one_prob)
with tf.name_scope("loss"):
    # Compute the cross-entropy term for each datapoint
    y_expand = tf.expand_dims(y, 1)
    entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_logit, labels=y_expand)
    # Sum all contributions
    l = tf.reduce_sum(entropy)
with tf.name_scope("optim"):
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(l)
with tf.name_scope("summaries"):
    tf.summary.scalar("loss", l)
    merged = tf.summary.merge_all()
Step 6: Add dropout to a hidden layer.
Step 7: Define a hidden layer with dropout.
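Steps 6 and 7 ask for dropout. A framework-agnostic sketch of the mechanism they call for (inverted dropout, with hypothetical helper names; in the TF1 code above you would instead feed a keep-probability placeholder to tf.nn.dropout applied after the ReLU):

```python
import numpy as np

def dropout(h, keep_prob, rng, training):
    """Inverted dropout: at train time, zero each unit with probability
    1 - keep_prob and rescale survivors by 1/keep_prob so the expected
    activation is unchanged; at eval time it is the identity."""
    if not training:
        return h
    mask = rng.random(h.shape) < keep_prob
    return np.where(mask, h / keep_prob, 0.0)

rng = np.random.default_rng(0)
h = np.ones((4, 5))                              # pretend hidden activations
h_eval = dropout(h, 0.5, rng, training=False)    # unchanged at evaluation
h_train = dropout(h, 0.5, rng, training=True)    # entries become 0.0 or 2.0
```

The rescaling is what lets the same graph be used at evaluation time with no correction factor.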
Step 8: Implement mini-batching training.
train_writer = tf.summary.FileWriter('/tmp/fcnet-tox21',
                                     tf.get_default_graph())
N = train_X.shape[0]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 0
    for epoch in range(n_epochs):
        pos = 0
        while pos < N:
            batch_X = train_X[pos:pos+batch_size]
            batch_y = train_y[pos:pos+batch_size]
            feed_dict = {x: batch_X, y: batch_y}
            _, summary, loss = sess.run([train_op, merged, l], feed_dict=feed_dict)
            print("epoch %d, step %d, loss: %f" % (epoch, step, loss))
            train_writer.add_summary(summary, step)
            step += 1
            pos += batch_size
    # Make predictions
    valid_y_pred = sess.run(y_pred, feed_dict={x: valid_X})
Step 9: Use TensorBoard to track model convergence.
Include screenshots of the following:
1) a TensorBoard graph for the model, and
2) the loss curve.
Abstract: This PDSG workshop introduces basic concepts of TensorFlow. The course covers fundamentals. Concepts covered are Vectors/Matrices/Tensors, Design & Run, Constants, Operations, Placeholders, Bindings, Operators, Loss Function, and Training.
Level: Fundamental
Requirements: Some basic programming knowledge is preferred. No prior statistics background is required.
Here we have a simple neural network, described in my slides about neural networks. It uses simple concepts from linear algebra to encapsulate the complexity (this even makes it possible to use parallel matrix multiplication and other algorithms to speed everything up) while making everything more modular and compact.
The data sets are coming from http://yann.lecun.com/exdb/mnist/.
A short list of the most useful R commands
reference: http://www.personality-project.org/r/r.commands.html
Prepared for anyone who is interested in the R language or is just starting to learn it.
Representation of signals & Operation on signals
(Time Reversal, Time Shifting , Time Scaling, Amplitude scaling, Signal addition, Signal Multiplication)
Machine learning in JS: where to start and where to go | Odessa Frontend Meetup #12, by OdessaFrontend
In recent years, machine learning has spread to every area of human activity. Every coffee maker and vacuum cleaner, to say nothing of web applications, tries to make our lives a little better by resorting to artificial intelligence. Do you need an advanced degree to try your hand at this demanding field, and can an ordinary front-end developer use a neural network in their favorite framework? Vlad Borsh talks about this and works out where to start.
A fast-paced introduction to TensorFlow 2 regarding some important new features (such as generators and the @tf.function decorator), along with tf.data code samples and lazy operators. We'll also delve into the key ideas underlying CNNs, RNNs, and LSTMs, followed by some Keras-based code blocks.
3. Ordinary training data: a single vector
Time-series data: a group of data vectors
t input data points x(n): x(0), x(1), x(2), x(3), … x(t)
e.g. monthly nationwide temperature data for 2016-2018
A single input data point: x(n)
e.g. an MNIST image
4. Learn from past time-series data so that, when unseen new time-series data is given, the future state can be predicted:
RNN
7. Recurrent neural network (RNN) structure
[diagram: the input layer x(t) feeds the hidden layer h(t) through U; the previous hidden layer h(t-1) feeds h(t) through W; h(t) feeds the output layer y(t) through V]
The input x(t) given at time t, combined with the stored hidden layer from time t-1, produces the hidden layer at time t.
8. Hidden layer formula: h(t) = f(Ux(t) + Wh(t-1) + b), with activation function f and bias b
Output layer formula: y(t) = g(Vh(t) + c), with activation function g and bias c
9. Let p(t) = Ux(t) + Wh(t-1) + b be the hidden layer value and q(t) = Vh(t) + c the output layer value, and let the error function be E = E(U, V, W, b, c). Define the error terms
eh(t) = ∂E/∂p(t) (hidden layer), eo(t) = ∂E/∂q(t) (output layer)
10. With the error term of the hidden layer eh(t) = ∂E/∂p(t) and the error term of the output layer eo(t) = ∂E/∂q(t), the chain rule gives the gradients:
∂E/∂U = eh(t) x(t)^T
∂E/∂V = eo(t) h(t)^T
∂E/∂W = eh(t) h(t-1)^T
∂E/∂b = eh(t)
∂E/∂c = eo(t)
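As a sanity check on these gradient formulas (a hypothetical NumPy sketch with made-up sizes), the outer products yield gradients with exactly the shapes of the corresponding weight matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 3, 4, 2
x_t = rng.normal(size=(n_in, 1))         # x(t) as a column vector
h_prev = rng.normal(size=(n_hidden, 1))  # h(t-1)
h_t = rng.normal(size=(n_hidden, 1))     # h(t)
eh = rng.normal(size=(n_hidden, 1))      # error term eh(t) = dE/dp(t)
eo = rng.normal(size=(n_out, 1))         # error term eo(t) = dE/dq(t)

dU = eh @ x_t.T       # dE/dU = eh(t) x(t)^T    -> shape (n_hidden, n_in)
dV = eo @ h_t.T       # dE/dV = eo(t) h(t)^T    -> shape (n_out, n_hidden)
dW = eh @ h_prev.T    # dE/dW = eh(t) h(t-1)^T  -> shape (n_hidden, n_hidden)
db = eh               # dE/db = eh(t)
dc = eo               # dE/dc = eo(t)
```

Matching shapes is necessary (though not sufficient) for the update step W := W - η ∂E/∂W to be well defined for every parameter.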
11. Output layer formula: y(t) = g(Vh(t) + c)
In a CNN, the activation function g(·) is a sigmoid function, softmax function, etc., and the output value is a probability.
In this RNN, the output activation is the identity function, g(x) = x, so y(t) = Vh(t) + c.
Squared error function: E = (1/2) Σ_{t=1}^{T} ||y(t) - t(t)||², where t(t) is the target.
12. BPTT (Backpropagation Through Time)
[diagram: the network unrolled in time; each input layer x(t), x(t-1), x(t-2) feeds its hidden layer through U, each previous hidden layer h(t-3), h(t-2), h(t-1) feeds the next through W, and the last hidden layer h(t) feeds the output layer y(t) through V]
13. BPTT (Backpropagation Through Time)
eh(t-1) = ∂E/∂p(t-1)
        = ∂E/∂p(t) ⊙ ∂p(t)/∂p(t-1)
        = eh(t) ⊙ (∂p(t)/∂h(t-1)) (∂h(t-1)/∂p(t-1))
        = eh(t) ⊙ W f'(p(t-1))
In the end, this expresses eh(t-1) in terms of eh(t).
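Under a column-vector convention, the slide's relation reads eh(t-1) = f'(p(t-1)) ⊙ (Wᵀ eh(t)). A small hypothetical NumPy check against finite differences, using a toy error function and made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
W = rng.normal(size=(n, n))
p_prev = rng.normal(size=n)              # p(t-1)

def E_of(p1):
    """Toy error: h(t-1) = tanh(p(t-1)); p(t) = W h(t-1); E = 0.5*||p(t)||^2."""
    p_t = W @ np.tanh(p1)
    return 0.5 * np.sum(p_t ** 2)

# Analytic back-propagated error term:
# eh(t) = dE/dp(t) = p(t), then eh(t-1) = f'(p(t-1)) ⊙ (W^T eh(t))
p_t = W @ np.tanh(p_prev)
eh_t = p_t
eh_prev = (1.0 - np.tanh(p_prev) ** 2) * (W.T @ eh_t)

# Finite-difference estimate of dE/dp(t-1) for comparison
eps = 1e-6
num = np.zeros(n)
for i in range(n):
    d = np.zeros(n)
    d[i] = eps
    num[i] = (E_of(p_prev + d) - E_of(p_prev - d)) / (2 * eps)
```

The two vectors agree to numerical precision, confirming that the one-step BPTT rule propagates the error term correctly.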
16. 1. Prepare DATA
# Functions that generate noisy sine-wave data
def sin(x, T=100):
    return np.sin(2.0 * np.pi * x / T)

def toy_problem(T=100, ampl=0.05):
    x = np.arange(0, 2 * T + 1)
    noise = ampl * np.random.uniform(low=-1.0, high=1.0, size=len(x))
    return sin(x) + noise
17. 1. Prepare DATA
Generate the data
from sklearn.model_selection import train_test_split

T = 100
f = toy_problem(T)

length_of_sequences = 2 * T  # length of the whole time series
maxlen = 25                  # length of one time-series sample

data = []
target = []
for i in range(0, length_of_sequences - maxlen + 1):
    data.append(f[i: i + maxlen])
    target.append(f[i + maxlen])

X = np.array(data).reshape(len(data), maxlen, 1)
Y = np.array(target).reshape(len(data), 1)

# split the data
N_train = int(len(data) * 0.9)
N_validation = len(data) - N_train
X_train, X_validation, Y_train, Y_validation = \
    train_test_split(X, Y, test_size=N_validation)
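The sliding-window slicing above turns the 1-D series into arrays of shape (samples, time steps, features). A quick shape check on a noiseless sine, using plain NumPy (no noise or train/validation split, so the result is deterministic):

```python
import numpy as np

T = 100
f = np.sin(2.0 * np.pi * np.arange(0, 2 * T + 1) / T)   # noiseless sine, 2T+1 = 201 points
maxlen = 25

# same windowing as the slide: each sample is 25 points, the target is the next point
data = [f[i: i + maxlen] for i in range(0, 2 * T - maxlen + 1)]
target = [f[i + maxlen] for i in range(0, 2 * T - maxlen + 1)]

X = np.array(data).reshape(len(data), maxlen, 1)   # (samples, time steps, features)
Y = np.array(target).reshape(len(data), 1)
```

With T = 100 and maxlen = 25 this yields 176 overlapping windows, and the first window is exactly the first 25 points of the series.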
18. 2. with Tensorflow
Model setup
import tensorflow as tf

n_in = len(X[0][0])   # 1
n_hidden = 20
n_out = len(Y[0])     # 1

x = tf.placeholder(tf.float32, shape=[None, maxlen, n_in])
t = tf.placeholder(tf.float32, shape=[None, n_out])
n_batch = tf.placeholder(tf.int32)

y = inference(x, n_batch, maxlen=maxlen, n_hidden=n_hidden, n_out=n_out)
loss = loss(y, t)              # the name loss is rebound to the MSE tensor
train_step = training(loss)

# EarlyStopping here is a patience-based helper class defined on a slide not shown
early_stopping = EarlyStopping(patience=10, verbose=1)
history = {
    'val_loss': []
}
19. 2. with Tensorflow
Model definition
def inference(x, n_batch, maxlen=None, n_hidden=None, n_out=None):
    def weight_variable(shape):
        initial = tf.truncated_normal(shape, stddev=0.01)
        return tf.Variable(initial)

    def bias_variable(shape):
        initial = tf.zeros(shape, dtype=tf.float32)
        return tf.Variable(initial)

    cell = tf.contrib.rnn.BasicRNNCell(n_hidden)
    initial_state = cell.zero_state(n_batch, tf.float32)

    state = initial_state
    outputs = []  # store the hidden-layer outputs from past steps
    with tf.variable_scope('RNN'):
        for t in range(maxlen):
            if t > 0:
                tf.get_variable_scope().reuse_variables()
            (cell_output, state) = cell(x[:, t, :], state)
            outputs.append(cell_output)

    output = outputs[-1]
    V = weight_variable([n_hidden, n_out])
    c = bias_variable([n_out])
    y = tf.matmul(output, V) + c  # linear activation
    return y

def training(loss):
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001,
                                       beta1=0.9,
                                       beta2=0.999)
    train_step = optimizer.minimize(loss)
    return train_step

def loss(y, t):
    mse = tf.reduce_mean(tf.square(y - t))
    return mse
20. 2. with Tensorflow
Model training
from sklearn.utils import shuffle

epochs = 500
batch_size = 10

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

n_batches = N_train // batch_size

for epoch in range(epochs):
    X_, Y_ = shuffle(X_train, Y_train)
    for i in range(n_batches):
        start = i * batch_size
        end = start + batch_size
        sess.run(train_step, feed_dict={
            x: X_[start:end],
            t: Y_[start:end],
            n_batch: batch_size
        })

    # evaluate on the validation data
    val_loss = loss.eval(session=sess, feed_dict={
        x: X_validation,
        t: Y_validation,
        n_batch: N_validation
    })
    history['val_loss'].append(val_loss)
    print('epoch:', epoch,
          ' validation loss:', val_loss)

    # Early Stopping check
    if early_stopping.validate(val_loss):
        break
21. 2. with Tensorflow
Prediction
truncate = maxlen
Z = X[:1]  # take only the first part of the original data

original = [f[i] for i in range(maxlen)]
predicted = [None for i in range(maxlen)]

for i in range(length_of_sequences - maxlen + 1):
    # predict the future from the last time-series sample
    z_ = Z[-1:]
    y_ = y.eval(session=sess, feed_dict={
        x: Z[-1:],
        n_batch: 1
    })
    # build a new time-series sample from the prediction
    sequence_ = np.concatenate(
        (z_.reshape(maxlen, n_in)[1:], y_),
        axis=0).reshape(1, maxlen, n_in)
    Z = np.append(Z, sequence_, axis=0)
    predicted.append(y_.reshape(-1))
23. 3. with keras
Model setup
from keras.models import Sequential
from keras.layers import Dense, Activation, SimpleRNN
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

n_in = len(X[0][0])   # 1
n_hidden = 20
n_out = len(Y[0])     # 1

def weight_variable(shape, name=None):
    return np.random.normal(scale=.01, size=shape)

early_stopping = EarlyStopping(monitor='val_loss', patience=10, verbose=1)

model = Sequential()
model.add(SimpleRNN(n_hidden,
                    kernel_initializer=weight_variable,
                    input_shape=(maxlen, n_in)))
model.add(Dense(n_out, kernel_initializer=weight_variable))
model.add(Activation('linear'))

optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
model.compile(loss='mean_squared_error',
              optimizer=optimizer)
24. 3. with keras
Model training
epochs = 500
batch_size = 10

model.fit(X_train, Y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(X_validation, Y_validation),
          callbacks=[early_stopping])
25. 3. with keras
Prediction
truncate = maxlen
Z = X[:1]  # take only the first part of the original data

original = [f[i] for i in range(maxlen)]
predicted = [None for i in range(maxlen)]

for i in range(length_of_sequences - maxlen + 1):
    z_ = Z[-1:]
    y_ = model.predict(z_)
    sequence_ = np.concatenate(
        (z_.reshape(maxlen, n_in)[1:], y_),
        axis=0).reshape(1, maxlen, n_in)
    Z = np.append(Z, sequence_, axis=0)
    predicted.append(y_.reshape(-1))
26. 3. with keras
Plot
import matplotlib.pyplot as plt

plt.rc('font', family='serif')
plt.figure()
plt.ylim([-1.5, 1.5])
plt.plot(toy_problem(T, ampl=0), color='blue')  # noiseless sine wave
plt.plot(original, color='red')                 # original data
plt.plot(predicted, color='black')              # model predictions
plt.show()