This document provides a tutorial on spike sorting using the wave_clus graphical user interface. It outlines the spike sorting method, which involves spike detection by amplitude thresholding, feature extraction with wavelets, and sorting via superparamagnetic clustering. The tutorial walks through loading simulated data into wave_clus, exploring clustering and parameter changes, and provides guidance on sorting real neural data recorded from epilepsy patients. The goal is to demonstrate the wave_clus software and the spike sorting workflow for automatically detecting and separating spikes from different neurons.
3. Goals:
• Algorithm for automatic detection and sorting of spikes.
• Suitable for on-line analysis.
• Improve both detection and sorting in comparison with previous approaches.
Outline of the method:
I - Spike detection: amplitude threshold.
II - Feature extraction: wavelets.
III - Sorting: Superparamagnetic clustering.
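As a rough illustration of step I, here is a minimal Python sketch of amplitude-threshold detection using the robust noise estimate from the paper (Thr = 4·σ_n, with σ_n = median(|x|)/0.6745). wave_clus itself is MATLAB code; the function name, default refractory period, and crossing logic below are illustrative choices, not the actual implementation.

```python
import numpy as np

def detect_spikes(x, fs, thr_factor=4.0, refractory_ms=1.5):
    """Amplitude-threshold spike detection (illustrative sketch).

    The threshold follows the robust noise estimate of the paper:
    sigma_n = median(|x|) / 0.6745, Thr = 4 * sigma_n.
    Returns sample indices of threshold crossings, enforcing a
    refractory period so each spike is counted only once.
    """
    x = np.asarray(x, dtype=float)
    sigma_n = np.median(np.abs(x)) / 0.6745   # robust estimate of noise std
    thr = thr_factor * sigma_n
    crossings = np.flatnonzero(np.abs(x) > thr)
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes = []
    last = -refractory - 1
    for idx in crossings:
        if idx - last > refractory:           # skip samples of the same spike
            spikes.append(idx)
            last = idx
    return np.array(spikes), thr
```

The median-based estimate is used instead of the plain standard deviation because the spikes themselves would inflate a direct std of the signal, whereas the median of |x| is barely affected by a few large events.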
4. This tutorial will show you how to do spike sorting using:
• The wave_clus graphic user interface.
• The batch files Get_spikes and Do_clustering.
5. Getting started…
• Add the directory wave_clus with subfolders to your MATLAB path (using the MATLAB File/Set Path menu).
• Type wave_clus in MATLAB to call the GUI.
• Choose DataType: Simulator and load the file C_Easy1_noise01_short (in the subdir wave_clus/Sample_data/Simulator) using the Load button.
7. Now you are ready to start playing with wave_clus…
• This is a 10 sec. segment of simulated data.
• First, choose the option plot_average to plot the average spike shapes (+/- 1 std). Then choose to plot the spike features.
• There may be some spikes left unassigned in cluster 0. Go back to plot_all and use the Force button to assign them to one of the clusters. Better?
• Now change the temperature. At t=0 you will get a single cluster; for large t's you may get many clusters (if the parameter min_clus allows it).
• Save the results using the Save clusters button. Load the output file times_C_Easy1_noise01.mat. Cluster membership is saved in the first column of the variable cluster_class. The second column gives the spike times.
• You can also change the isi histogram plots using the max and step options.
• Finally, check the parameters used in the Set_parameters_simulation file in wave_clus/Parameters_files (just type ‘open set_parameters_simulation’ in MATLAB).
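To make the output format concrete, here is a small Python sketch of how one could compute an ISI histogram from a cluster_class-style array (column 0 = cluster membership, column 1 = spike time). The mock array, function name, and bin settings are illustrative; to work with the real output you would first load times_*.mat, e.g. with scipy.io.loadmat.

```python
import numpy as np

def isi_histogram(cluster_class, cluster, bin_ms=1.0, max_ms=100.0):
    """Inter-spike-interval histogram for one cluster.

    `cluster_class` mimics the variable saved by wave_clus:
    column 0 = cluster membership, column 1 = spike time in ms.
    """
    times = cluster_class[cluster_class[:, 0] == cluster, 1]
    isis = np.diff(np.sort(times))                 # intervals in ms
    edges = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis, bins=edges)
    return counts, edges

# Mock output: three spikes in cluster 1 (at 0, 10, 25 ms), one in cluster 2.
cc = np.array([[1, 0], [2, 5], [1, 10], [1, 25]], dtype=float)
counts, edges = isi_histogram(cc, cluster=1)
```

A very short-ISI peak (below the ~2-3 ms refractory period) in such a histogram is a classic sign that a cluster mixes spikes from more than one neuron.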
8. Playing with the spike features…
• Load the file C_Difficult1_noise015, again using the DataType: Simulator.
• Use the Spike features option.
10. Seeing the clusters…
• There are 3 different spike shapes, but you don’t see three clear clusters. That’s because wave_clus plots in the main window the first 2 wavelet coefficients.
• You may, however, get something different because SPC is a stochastic clustering method. If you don’t get the 3 clusters, you may have to change the temperature.
• You can see the rest of the projections by clicking the Plot all projections button.
11. It would look like this…
Clusters separate clearly in some projections.
12. Using Principal Component Analysis
• Now do the same using PCA. Open set_parameters_simulation and select features = ‘pca’ instead of features = ‘wav’ (don’t forget to set it back to ‘wav’ when you are done!).
• Load again the data C_Difficult1_noise015.
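For reference, PCA feature extraction on spike waveforms amounts to projecting each waveform onto the leading principal components of the waveform matrix. A minimal numpy sketch (the function name and the choice of 3 components are illustrative assumptions, not wave_clus's exact settings):

```python
import numpy as np

def pca_features(spikes, n_components=3):
    """Project spike waveforms onto their first principal components.

    `spikes` is an (n_spikes, n_samples) matrix of aligned waveforms.
    The principal directions are the right singular vectors of the
    mean-centered data; the scores along the top components are the
    features handed to the clustering step.
    """
    centered = spikes - spikes.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T
```

With two waveform families that differ only in amplitude, the first PC score separates them cleanly; as the next slide discusses, the method struggles when the differences are small and localized in time.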
14. Why does PCA do so badly here?
• As you see, there’s now only a single cluster (and there is no temperature at which you can split it into 3!). You have just replicated the results of Fig. 8 of the Neural Computation paper (see reference at the end).
• In this dataset the spike shapes are very similar, and their differences are localized in time. Due to its excellent time-frequency resolution, the wavelet transform does much better.
• Also, don’t forget that PCA looks for directions of maximum variance, which are not necessarily the ones offering the best separation between the clusters. Wavelets combined with the KS test (see paper) look for the coefficients with multimodal distributions, which are very likely the ones offering the best separation between the clusters.
• In summary, the Neural Computation paper shows, for several different examples of simulated data, a better performance of wavelets in comparison to PCA.
15. You are now ready for real data!
• You will now load a ~30’ multiunit recording from a human epilepsy patient. The data was collected at Itzhak Fried’s lab at UCLA.
• Intracranial recordings in these patients (refractory to medication) are done for clinical reasons, in order to evaluate the feasibility of epilepsy surgery.
• Load the file CSC4 using the DataType: CSC (pre-clustered). Using the (pre-clustered) option you will load data that has already been clustered using the batch file Do_clustering_CSC. If you want to start from scratch, use the CSC option.
• Check the settings in the Set_parameters_CSC file. If you have a Neuralynx system you can already use the CSC and Sc options for your own data.
• BTW, there should be a few cool publications coming up using these human data. If you’re interested, check www.vis.caltech.edu/~rodri/publications in the near future or email me.
17. Playing with it…
• Again, you can change the temperature, force the clustering, see the spike features, etc. Remember that everything is much faster if you use Plot_average instead of Plot_all.
• You can also zoom into the data using the Tools menu.
• You may also want to fix a given cluster by using the fix button. This option is useful for choosing clusters at different temperatures or for not forcing all the clusters together.
18. One further example:
• Sometimes clusters appear at different temperatures.
• In the following slides we give a step-by-step example of a clustering procedure using the fix button.
19. Step 1: Fix cluster 2 at low T
Step 2: Change to T2
Step 3: Check features
Step 4: Fix clusters 2 and 3
20. Step 5: Change to T3
Step 6: Re-check features
Step 7: Push the Force button
This is what the final clustering looks like!
Note that after forcing, the green cluster is not as clean as before.
21. Clustering your own data…
• Most likely you’ll end up using the ASCII DataType option for your data.
• If you have continuous data, it should be stored as a single vector in a variable data, which is saved in a .mat file. Look at the file test.mat for an example. This data should be loaded using the ASCII option, or ASCII (pre-clustered) if you have already clustered it with the Do_clustering batch file.
• If you have spikes that have already been detected, you should use the ASCII spikes option. The spikes should be stored in a matrix named spikes in a .mat file. The file test1_spikes.mat gives an example of the format.
• You can set the optimal parameters for your data in the corresponding Set_parameters_ascii (or ascii_spikes) file. Most important, don’t forget to set the sampling rate sr!
• Important note: To save computational time, if you have more than 30000 spikes in your dataset, by default these will be assigned by template matching with the batch clustering code (this can be changed in the set_parameters file). With the GUI, they will stay in cluster 0 and should be assigned to the other clusters using the Force button.
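A nearest-template assignment of the kind used for those leftover spikes can be sketched in Python as follows. This is only a plausible illustration of the idea (assign each unassigned waveform to the closest cluster mean); the function name, the Euclidean distance, and the optional distance cutoff are assumptions, and wave_clus's actual matching criterion may differ.

```python
import numpy as np

def force_membership(spikes, labels, max_dist=None):
    """Assign cluster-0 (unassigned) spikes to the nearest cluster mean.

    `spikes`  : (n_spikes, n_samples) waveform matrix.
    `labels`  : integer cluster per spike, 0 = unassigned.
    `max_dist`: optional cutoff; spikes farther than this from every
                template stay in cluster 0.
    """
    labels = labels.copy()
    ids = [c for c in np.unique(labels) if c != 0]
    templates = {c: spikes[labels == c].mean(axis=0) for c in ids}
    for i in np.flatnonzero(labels == 0):
        dists = {c: np.linalg.norm(spikes[i] - t) for c, t in templates.items()}
        best = min(dists, key=dists.get)
        if max_dist is None or dists[best] <= max_dist:
            labels[i] = best
    return labels
```

The templates are computed once from the already-clustered spikes, so forcing many leftover spikes is far cheaper than re-running the clustering itself.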
22. Using the batch files…
• There are two main batch files: Get_spikes (for spike detection) and Do_clustering (for spike sorting). Parameters are set in the first lines. They both go through all the files listed in Files.txt.
• Unsupervised results will be saved and printed (either on the printer or to a file), but can later be changed with the GUI. For changing results, you have to load the file with the (pre-clustered) option. The nice thing is that results for all temperatures are stored, so changing things with the GUI mainly implies storing a different set of results rather than doing the clustering again. Note that using the GUI for clustering (e.g. with the ASCII option) does not store the clustering results for future use.
23. You are now a clustering expert!
• If you want further details on the method, check:
Unsupervised spike sorting with wavelets and superparamagnetic clustering
R. Quian Quiroga, Z. Nadasdy and Y. Ben-Shaul.
Neural Computation 16, 1661-1687; 2004.
• If you want to keep updated on new versions, or to give me comments or feedback on how wave_clus works with your data (I would love to hear about it), please email me at: rodri@vis.caltech.edu
• Good luck and hope it’s useful!