1. The document discusses several methods for analyzing neuronal morphology from digital images, including Sholl analysis, NeuronJ, and NeuriteTracer.
2. It provides details on each method, such as average time cost, limitations, and publication date. NeuriteTracer has the fastest average time (0.3~0.5 seconds) but only reports total length per image.
3. The document also describes an automated process for neuronal reconstruction involving thresholding, sampling, modeling, structuring, and measuring pixels from images, as well as references detailing techniques for automatic thresholding and skeletonization of point clouds.
Development of a Virtual Reality Simulator for Robotic Brain Tumor Resection (saulnml)
This document describes the development of a virtual reality simulator for robotic brain tumor resection. Key components included realistic tissue deformation and cutting, force feedback, and dynamic motion scaling. Preliminary results found that using force feedback and dynamic motion scaling reduced collisions with healthy brain tissue and improved accuracy. Future work will add full robot kinematics, virtual fixtures, and other enhancements to simulate the procedure more realistically. The goal is to create an effective training tool to help surgeons practice complex neurosurgeries.
Introduction to resting state fMRI preprocessing and analysis (Cameron Craddock)
From the Australia Connectomes course 2018 in Melbourne, Australia: a brief introduction to CPAC and an in-depth lecture on how to preprocess functional MRI data.
Online Vigilance Analysis Combining Video and Electrooculography Features (Ruofei Du)
http://www.duruofei.com/Research/drowsydriving
In this paper, we propose a novel system for analyzing vigilance level that combines both video and electrooculography (EOG) features. The video features extracted from an infrared camera include percentage of eye closure (PERCLOS) and eye blinks, while slow eye movement (SEM) and rapid eye movement (REM) features are extracted from the EOG signals. In addition, features such as yawn frequency, body posture, and face orientation are extracted from the video using an Active Shape Model (ASM). The results of our experiments indicate that our approach outperforms existing approaches based on video or EOG alone. In addition, the prediction offered by our model is in close proximity to the actual error rate of the subject. We believe this method can be widely applied to prevent accidents such as fatigued driving.
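As a concrete illustration of the PERCLOS measure mentioned above, here is a minimal sketch (not from the paper) that computes it from a hypothetical per-frame eye-closure series; the 80% closure threshold is a common convention, not a value taken from this abstract.

import numpy as np

def perclos(closure, threshold=0.8):
    """Fraction of frames in which the eye is at least `threshold` closed.

    closure: 1-D array of per-frame eye-closure ratios in [0, 1],
             e.g. 1 - (eyelid opening / calibrated full opening).
    """
    closure = np.asarray(closure, dtype=float)
    return float(np.mean(closure >= threshold))

# Example: 30 s of video at 10 fps, with a drowsy spell at the end.
frames = np.concatenate([np.full(200, 0.1), np.full(100, 0.9)])
print(perclos(frames))  # -> 0.333..., i.e. PERCLOS ~ 33%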
ASRT at 2015 RSNA Annual Meeting: CT Scanning for Face Transplant Surgical Planning (Frank Rybicki)
Frank Rybicki's lecture at the 2015 Annual Meeting of the Radiological Society of North America (RSNA). This lecture was invited by the American Society of Radiologic Technologists (ASRT) and features amazing technologists from my past position at Brigham and Women's Hospital in Boston, Massachusetts. Many of these slides were generously provided by Bo Pomahac, MD, a brilliant surgeon, caring physician, and wonderful friend. I have removed the images of Bo's patients. We have permission to publish them, but I wanted to play it safe and avoid any possible complaints.
(1) The speaker presented a large-scale image database containing over 1,000 orthopaedic surgery cases used to develop automated segmentation and statistical modeling techniques.
(2) Automated segmentation algorithms were developed to reconstruct anatomical structures and segment bones, muscles, and other tissues from CT images.
(3) Potential applications of the database and models include analyzing joint positioning, modeling disease progression, estimating muscle fiber arrangement, and developing statistical models of surgical expertise.
Top Cited Article in Informatics Engineering Research: October 2020 (ieijjournal)
Informatics is a rapidly developing field. The study of informatics involves human-computer interaction and how an interface can be built to maximize user efficiency. Due to the growth of IT, individuals and organizations increasingly process information digitally, which has led the field to address challenges in privacy, security, healthcare, education, poverty, and the environment. Informatics Engineering, an International Journal (IEIJ) is an open-access peer-reviewed journal that publishes articles contributing new results in all areas of informatics. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on the human use of computing in fields such as communication, mathematics, multimedia, and human-computer interaction design, and to establish new collaborations in these areas.
This paper presents a new local facial feature descriptor, Local Gray Code Pattern (LGCP), for facial expression recognition, in contrast to the widely adopted Local Binary Pattern (LBP). LGCP characterizes both the texture and contrast information of facial components. The descriptor is obtained from local gray-level intensity differences in a 3x3-pixel area, weighted by their corresponding term frequency (TF). The extended Cohn-Kanade (CK+) dataset and the Japanese Female Facial Expression (JAFFE) dataset were used with a multiclass support vector machine (LIBSVM) to evaluate the proposed method on six and seven basic expression classes, in both person-dependent and person-independent settings. According to extensive experimental results on static images of prototypic expressions, the proposed method achieves the highest recognition rate compared with existing appearance-based feature descriptors such as LPQ, LBP, LBPU2, LBPRI, and LBPRIU2.
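To make the descriptor idea concrete, here is a minimal sketch of a gray-code-style local pattern in the spirit of LGCP: it thresholds the eight neighbor-versus-center differences in a 3x3 area and converts the binary code to a reflected Gray code. The TF weighting from the paper is omitted, and the exact encoding below is an assumption for illustration, not the authors' definition.

import numpy as np

def lgcp_histogram(img):
    """Gray-code pattern histogram over a grayscale image (simplified sketch)."""
    img = img.astype(np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((neigh >= center).astype(np.int32) << bit)  # 8-bit binary pattern
    gray = code ^ (code >> 1)                  # binary -> reflected Gray code
    return np.bincount(gray.ravel(), minlength=256)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(lgcp_histogram(img).sum())  # 62*62 interior pixels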
A study of a modified histogram based fast enhancement algorithm (MHBFE) (sipij)
Image enhancement is one of the most important issues in low-level image processing. Its goal is to improve the quality of an image such that the enhanced image is better than the original. Conventional histogram equalization (HE) is one of the most widely used algorithms for contrast enhancement of medical images, due to its simplicity and effectiveness. However, it causes an unnatural look and visual artifacts, as it tends to change the brightness of an image. The Histogram Based Fast Enhancement Algorithm (HBFE) aims to enhance CT head images, reducing the washed-out effect caused by conventional histogram equalization with less complexity. It relies on using the full range of gray levels to enhance the soft tissue while ignoring other image details. We present a modification of this algorithm that is valid for most CT image types while keeping the same degree of simplicity. Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE) improves the results in terms of PSNR, AMBE, and entropy. We also use statistical analysis to confirm that the improvement from the proposed modification can be generalized: analysis of variance (ANOVA) is used first to test whether all the results have the same average, and we then find a significant improvement from the modification.
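For reference, here is a minimal sketch of the conventional global histogram equalization that HBFE and MHBFE improve upon; the modified algorithm itself is not specified in this abstract, so only the HE baseline is shown.

import numpy as np

def histogram_equalize(img):
    """Conventional global histogram equalization for an 8-bit image.

    Maps each gray level through the normalized cumulative histogram,
    spreading intensities over the full 0..255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = (np.random.rand(128, 128) * 120 + 40).astype(np.uint8)  # low-contrast input
out = histogram_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())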
Advanced Computational Intelligence: An International Journal (ACII) is a quarterly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computational intelligence. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced computational intelligence concepts and to establish new collaborations in these areas.
Authors are solicited to contribute to this journal by submitting articles that present research results, projects, survey works, and industrial experiences describing significant advances in computational intelligence.
The document summarizes a research project on single image haze removal using a variable fog-weight. It begins with an introduction on how haze degrades image quality and the need for haze removal techniques. It then discusses the motivation, literature review, objective, and main contribution of the proposed method. The method uses the dark channel prior to estimate the transmission map and atmospheric light. It then applies a variable fog-weight to modify the transmission map and reduce halo artifacts. A guided filter is used for transmission refinement before recovering the haze-free scene radiance. The method aims to improve on existing techniques by reducing time complexity and halo artifacts while enhancing image visibility.
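A minimal sketch of the dark-channel-prior transmission estimate described above, assuming the usual formulation t = 1 - w * dark(I/A); the constant omega below stands in for the paper's variable fog-weight, and the patch size is an assumed default.

import numpy as np
from scipy.ndimage import minimum_filter

def transmission_map(img, airlight, omega=0.95, patch=15):
    """Transmission estimate from the dark channel prior.

    img: float RGB image in [0, 1]; airlight: per-channel atmospheric light.
    omega plays the role of the fog-weight; the summarized method makes it
    variable instead of a fixed constant.
    """
    normalized = img / airlight                                # per-channel normalization
    dark = minimum_filter(normalized.min(axis=2), size=patch)  # dark channel
    return 1.0 - omega * dark

img = np.random.rand(100, 100, 3) * 0.8
A = np.array([0.9, 0.9, 0.92])
t = transmission_map(img, A)
print(t.shape, float(t.min()), float(t.max()))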
The document describes a project to develop a gender voice recognition system using machine learning. It aims to achieve higher accuracy than existing MLP models. The proposed system uses logistic regression and fast Fourier transform for noise cancellation. It achieves 96.74% accuracy on test data, higher than existing systems. The document outlines the aim, abstract, introduction, literature review on existing approaches, proposed system description using algorithms like logistic regression and FFT, requirements, UML diagrams, advantages of automatic gender recognition, limitations, output, references, and conclusions.
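As an illustration of the two named ingredients, here is a toy sketch combining an FFT-based noise-reduction step with logistic regression on a dominant-frequency feature; the data, the feature choice, and the 120/210 Hz fundamentals are hypothetical and not taken from the document.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fft_denoise(signal, keep=0.1):
    """Zero out the weakest FFT coefficients as a crude noise-cancellation step."""
    spec = np.fft.rfft(signal)
    cutoff = np.quantile(np.abs(spec), 1 - keep)   # keep the strongest 10%
    spec[np.abs(spec) < cutoff] = 0
    return np.fft.irfft(spec, n=len(signal))

rng = np.random.default_rng(0)
# Toy data: 'voices' as noisy sinusoids, label 1 = higher fundamental frequency.
t = np.linspace(0, 1, 800, endpoint=False)
X, y = [], []
for _ in range(200):
    label = int(rng.integers(0, 2))
    f0 = 120 if label == 0 else 210               # rough low/high F0 ranges
    sig = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=t.size)
    clean = fft_denoise(sig)
    X.append([np.abs(np.fft.rfft(clean)).argmax()])  # dominant-frequency feature
    y.append(label)
clf = LogisticRegression().fit(X[:150], y[:150])
print("test accuracy:", clf.score(X[150:], y[150:]))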
Saliency Detection via Divergence Analysis: A Unified Perspective, ICPR 2012 (Jia-Bin Huang)
A number of bottom-up saliency detection algorithms have been proposed in the literature. Since these have been developed from intuition and principles inspired by psychophysical studies of human vision, the theoretical relations among them are unclear. In this paper, we present a unifying perspective. Saliency of an image area is defined in terms of divergence between certain feature distributions estimated from the central part and its surround. We show that various, seemingly different saliency estimation algorithms are in fact closely related. We also discuss some commonly used center-surround selection strategies. Experiments with two datasets are presented to quantify the relative advantages of these algorithms.
Best student paper award in Computer Vision and Robotics Track
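A minimal sketch of the unifying view in this abstract: the saliency of a location is taken as the divergence (here KL) between intensity histograms of a center region and its surround. The radii, bin count, and the KL choice are assumptions for illustration.

import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL divergence between two normalized histograms."""
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def center_surround_saliency(img, y, x, r_in=8, r_out=24, bins=32):
    """Saliency of a location as divergence between center and surround
    intensity histograms -- the view described in the abstract."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    d2 = (yy - y) ** 2 + (xx - x) ** 2
    center = img[d2 <= r_in ** 2]
    surround = img[(d2 > r_in ** 2) & (d2 <= r_out ** 2)]
    hc, _ = np.histogram(center, bins=bins, range=(0, 256))
    hs, _ = np.histogram(surround, bins=bins, range=(0, 256))
    return kl_divergence(hc.astype(float), hs.astype(float))

img = np.full((100, 100), 100.0)
img[45:55, 45:55] = 220.0                      # a bright, salient patch
print(center_surround_saliency(img, 50, 50))   # high divergence
print(center_surround_saliency(img, 20, 20))   # near zero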
The document summarizes research on daily living activity recognition using an efficient combination of high- and low-level cues. The researchers propose an approach that fuses body pose estimation with low-level cues such as optical flow to produce an enriched descriptor. A Fisher kernel representation is then used to model the temporal variation in video sequences for recognizing activities. The approach achieves state-of-the-art results on the Rochester ADL dataset.
Multi-legged Robot Walking Strategies, with an Emphasis on Image-based Methods (Kazi Mostafa)
The document outlines a research project on developing edge detection methods and walking strategies for multi-legged robots. It discusses using morphological operations on hexagonal grid images to remove noise and detect edges for low resolution images in real-time applications with low computational power. It describes developing structuring elements of various sizes and directions, and comparing performance of hexagonal versus rectangular grid images. The document also explores using fuzzy morphology and discusses evaluating different methods to determine the optimal approach for edge detection to enable efficient walking strategies for robots with damaged legs.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda; German-language webinar)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it into advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance by Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered:
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative projects and software. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open-source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
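As a taste of what such an implementation guide covers, here is a hedged sketch of an Atlas vector search query using the $vectorSearch aggregation stage via pymongo. The connection string, database, collection, field, and index names are all hypothetical, and it assumes a configured Atlas deployment with a vector index over the embedding field.

from pymongo import MongoClient

# Hypothetical connection string, database, collection, and index names.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
coll = client["shop"]["products"]

query_vector = [0.12, -0.07, 0.33]  # embedding of the user query (toy length)

pipeline = [
    {
        "$vectorSearch": {
            "index": "product_embedding_index",  # assumed Atlas vector index
            "path": "embedding",                 # field holding the vectors
            "queryVector": query_vector,
            "numCandidates": 100,                # ANN candidates to consider
            "limit": 5,                          # results to return
        }
    },
    {"$project": {"name": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in coll.aggregate(pipeline):
    print(doc["name"], doc["score"])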
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days, 6.6.2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
2. • Which structure can cause a disorder?
• What is the most effective factor?
▲ A. J. Billnitzer, et al. (2013) APP independent and dependent effects on neurite outgrowth are modulated by the receptor associated protein (RAP)
3. ▲ Halavi M, Hamilton KA, Parekh R, Ascoli GA (2012) Digital reconstructions of neuronal morphology: three decades of research trends
4. Method comparison

Method            | Sholl Analysis             | NeuronJ                       | NeuriteTracer
Approach          | Counting distance boundary | Manual tracing (path finding) | Convolution skeletonize
Published         | 1953                       | 2004                          | 2008
Average time cost | -                          | More than 1 min.              | 0.3~0.5 seconds
Major limitation  | Indirect measurement       | Time & handling cost          | Total length per image only

(Sholl analysis: method by D. A. Sholl, 1953; figure from AM. Magariños et al., 2006. NeuronJ: E. Meijering et al., 2004. NeuriteTracer: M. Pool, et al., 2008.)
17. References
• Automatic thresholding: K. Srinivas, V. Srikanth (2011) Automatic histogram threshold with fuzzy measures using C-means
• L1-median skeletonize: H. Huang, et al. (2013) L1-medial skeleton of point cloud
• Page 2: A. J. Billnitzer, et al. (2013) APP independent and dependent effects on neurite outgrowth are modulated by the receptor associated protein (RAP)
• Page 3: M. Halavi, et al. (2012) Digital reconstructions of neuronal morphology: three decades of research trends
• Page 4 (left): AM. Magariños et al. (2006) Rapid and reversible changes in intrahippocampal connectivity during the course of hibernation in European hamsters
• Page 4 (right): M. Pool, et al. (2008) NeuriteTracer: a novel ImageJ plugin for automated quantification of neurite outgrowth
• Page 16 (left): http://en.wikipedia.org/wiki/Sanford-Burnham_Medical_Research_Institute
• Page 16 (right): http://www.wired.co.uk/magazine/archive/2013/09/start/clear-thinking
Good morning, I am Song Yonggeun from Takumi Lab. I am here to report a new method that I found during my master's research and that might be beneficial to your own research.
There is a field of study named "neurite morphology". It analyzes neuron shape quantitatively, through images. As you know, the neuron is the primary element of the mental system, and I am sure most of you know better than I do what kind of tragedy can happen when it is deficient: Alzheimer's disease, autism spectrum disorder, and so on. Neurite morphology aims to figure out the mechanism and answer these questions.
The field, especially program-aided neurite morphology, is now growing. The number of published papers that use software is increasing, but I don't think this is fast enough. There are many, many more studies that have neurite images but have never dreamed of analyzing them. (1 min.)
The problem is that today's solutions are limited, typically when they face large-scale data. For example, on the left, the traditional method "Sholl analysis", which is widely accepted even today, has a risk of bias because it is an indirect measurement. NeuronJ, in the middle, and related tools require a lot of manual handling: you have to make traces one by one. And NeuriteTracer, quite popular today, can only output total length per image. There are many other tools, but each one has its own limitations like these. Most tools are fine if we have dozens of images, but what if it's thousands of images? What if millions?
There I realized a new method is urgently needed. It has to be automatic, easy to adopt; comprehensive, providing structural details; and above all reliable: needless to say, the measurement has to be accurate. I started from this point. Classical methods had many obstacles to being automated. The term "automate" doesn't just mean easy handling: to automate the process, the method has to cope with various conditions, since there are many different microscopes with different settings, resolutions, zoom levels, and so on. Jumping straight to the conclusion: I got a solution, I named it "T4trace", and I'd like to show you how it works. (2 min.)
T4trace builds a reconstruction model from loaded image data, then measures the model and outputs the results. Let me explain the procedure step by step.
Before we begin: in a computer, an image file is not actually visual. It is a set of numbers, called "bitmap data" or "raster image data". You can see this clearly when you zoom in close. A cell in the grid is called a pixel, and a pixel contains attributes for its area; the most important parts are the color intensities, R, G, and B. (3 min.)
In the most generally used 8-bit format, you have two to the power of eight (256) levels of intensity per color. The higher the intensity, the stronger the signal, and therefore the brighter the pixel. From 1,000 sample images, I got a rough sense of those ranges and distributions: in the histogram on the left, greens are signal and pinks are noise. You may have noticed that it is impossible to clear out all the noise completely. To solve this problem, I picked up a fuzzy threshold algorithm published in 2011. It first picks a cut-off, separates the two parts, and gets the mean expectation from both sides; it then takes the center and moves over, repeating until it hits a value visited before. If you start from here, the value bounces up and down until it converges into the proper range. On the sample set, the threshold value usually ranged around 80 +/- 20, so you can see it worked. (4 min.)
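A minimal sketch of the iterative mean-midpoint thresholding described here; the cited 2011 algorithm adds fuzzy membership weighting, which this toy version omits.

import numpy as np

def iterative_threshold(img, t0=128):
    """Iterative mean-midpoint thresholding (isodata-style sketch).

    Alternate: split at t, average both sides, move t to the midpoint of
    the two means, and stop when t revisits a value (convergence).
    """
    t, seen = float(t0), set()
    while round(t) not in seen:
        seen.add(round(t))
        below, above = img[img < t], img[img >= t]
        if below.size == 0 or above.size == 0:
            break
        t = (below.mean() + above.mean()) / 2.0   # midpoint of the two means
    return t

# Toy image: dim noise background plus brighter neurite-like signal.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 10, 9000), rng.normal(140, 15, 1000)])
img = np.clip(img, 0, 255)
print(iterative_threshold(img))   # converges between the two populations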
The next step is the sampling phase. As I mentioned about bitmap data, the image here is also a set of numbers. We humans can simply see, distinguish, and measure, but a machine cannot make such naive decisions. So we must convert those numbers into computable objects, which I call "nodes". One dot here indicates one node. We then bring the nodes into the computation.
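A minimal sketch of this sampling step, assuming a simple Node record; the name and fields are mine, chosen to match the talk's description.

import numpy as np
from dataclasses import dataclass

@dataclass
class Node:
    """A computable object for one above-threshold pixel (a 'node' in the talk)."""
    y: float
    x: float
    intensity: float
    fixed: bool = False   # set True once the skeletonization settles it

def sample_nodes(img, threshold):
    """Convert pixels that pass the threshold into Node objects."""
    ys, xs = np.nonzero(img >= threshold)
    return [Node(float(y), float(x), float(img[y, x])) for y, x in zip(ys, xs)]

img = np.zeros((8, 8))
img[2, 2:6] = [90, 120, 130, 95]
nodes = sample_nodes(img, 80)
print(len(nodes), nodes[0])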
To build the skeleton structure, I adapted the L1-medial skeletonization method published in 2013, with some changes. The basic idea is simple. Imagine you have a loaf of bread, and you inject special anko (red bean paste) that can freely move around inside the bread. What if that special anko interacts and gathers into the core part of the bread? You get the seam inside, like a skeleton of the bread. It works exactly like that. The criteria are two things: the number of neighbors within a given distance boundary, and linearity. For each node, if there are not enough active neighbors nearby, fix it; that node is done (you can see their color changed to green). If there are enough neighbors to test and they are aligned in a line, the node gets nudged into the line; otherwise, merge the neighbors into the node. Then double the boundary and run the same round again, until every node is fixed. (5 min.)
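A toy sketch of the contraction idea described above. The real L1-medial skeleton of Huang et al. (2013) uses weighted L1 medians with a repulsion term; this simplified version only mirrors the neighbor-count / linearity / boundary-doubling logic from the talk, with thresholds chosen arbitrarily.

import numpy as np

def contract(points, h0=2.0, min_neighbors=3, lin_tol=0.95, rounds=6):
    """Each round: for every unfixed point, look at neighbors within radius h.
    Too few neighbors -> fix the point. Neighbors roughly collinear (PCA)
    -> nudge the point onto the local line. Otherwise pull the point to the
    neighbors' centroid ('merge'). Double h and repeat."""
    pts = np.asarray(points, dtype=float).copy()
    fixed = np.zeros(len(pts), dtype=bool)
    h = h0
    for _ in range(rounds):
        if fixed.all():
            break
        for i in np.flatnonzero(~fixed):
            d = np.linalg.norm(pts - pts[i], axis=1)
            nbr = pts[(d < h) & (d > 0)]
            if len(nbr) < min_neighbors:
                fixed[i] = True                      # not enough neighbors: done
                continue
            centered = nbr - nbr.mean(axis=0)
            w = np.linalg.eigvalsh(centered.T @ centered)
            linearity = w[-1] / (w.sum() + 1e-12)    # ~1 if neighbors lie on a line
            if linearity > lin_tol:
                _, _, vt = np.linalg.svd(centered, full_matrices=False)
                axis, mean = vt[0], nbr.mean(axis=0)
                pts[i] = mean + np.dot(pts[i] - mean, axis) * axis  # nudge onto line
            else:
                pts[i] = nbr.mean(axis=0)            # merge toward the centroid
        h *= 2.0                                     # double the boundary
    return pts

# Noisy points around a line; contraction pulls them onto the seam.
rng = np.random.default_rng(0)
x = np.linspace(0, 20, 60)
cloud = np.stack([x, 0.3 * rng.normal(size=x.size)], axis=1)
print(np.abs(contract(cloud)[:, 1]).mean())  # much closer to 0 than the 0.3 spread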
After all nodes are fixed, we link them to build a reconstruction model. One trick here: since the soma usually shows the strongest signal of all, pick the biggest node as the soma, then expand the structure to its neighbors. If you overlay the model on the source image, you can see it matches nicely. (6 min.)
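A minimal sketch of this linking step, assuming a breadth-first expansion from the brightest node and a hypothetical linking radius.

import numpy as np

def link_from_soma(coords, intensity, radius=3.0):
    """Grow a tree outward from the brightest node (taken as the soma).

    Breadth-first expansion: starting at the highest-intensity node,
    repeatedly attach unvisited nodes that lie within `radius` of an
    already-linked node. Returns a list of (parent, child) index pairs.
    """
    coords = np.asarray(coords, dtype=float)
    soma = int(np.argmax(intensity))          # strongest signal -> soma
    linked, frontier, edges = {soma}, [soma], []
    while frontier:
        cur = frontier.pop(0)
        d = np.linalg.norm(coords - coords[cur], axis=1)
        for j in np.flatnonzero(d < radius):
            if j not in linked:
                linked.add(int(j))
                edges.append((cur, int(j)))
                frontier.append(int(j))
    return edges

pts = [(0, 0), (2, 0), (4, 0), (6, 1)]
inten = [250, 120, 110, 100]                  # node 0 is the soma
print(link_from_soma(pts, inten))             # [(0, 1), (1, 2), (2, 3)]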
At this moment, let me show you a brief demo. The sample images were provided by another experiment: primary culture, cortical (Ctx) neurons from ICR E15 mice. If you double-click an image on the list, you get the result instantly, in a wink. Plus, you can check the procedure with interactive image layers, and also the quantified structure data. For high throughput, let me show you 100 images being processed on the fly. As you can see, you don't have many parameters; actually these are optional, and you don't really need to touch them. There you go. (7 min.)
Now, back to the presentation. It seems nice, but are those numbers correct? I made 100 manual traces and took them as a control, then compared them to the program's tracing outputs. Say you get 1,000 px of length from a manual trace, and the program gets 900 px from the same source; the difference rate is then |1,000 - 900| / 1,000 = 10%. In that sense, I compared T4trace with NeuriteTracer (NT). T4trace was definitely more accurate than NT, and its variance was relatively small, which indicates it rarely goes wrong.
You can confirm this with correlation, too: T4trace showed 10% higher correlation with the manual traces than NT. Therefore, we can say that T4trace is accurate, at least better than NT. (8 min.)
So here we have found a new solution that is automatic, comprehensive, and reliable. In addition, T4trace is universal: it can run in most desktop environments, meaning you can try it on your laptop with your own data. And it is definitely fast and easy to use. I hope you will try it once.
I’m expecting the solution might helphigh-throughput screening, or large-scale reconstruct.Or, Maybe It can find its usability on outside of neurite morphology, i.e. cell detection or so.
This presentation has referred to many works, as listed here. I'd like you to focus on the top two, about the algorithms; the others are image references used in these slides.
You can freely download the software from the website. I hope you will try T4trace with your own data. Just before closing, I want to thank K. Fukumoto and Takumi-sensei for their advice. This concludes my talk. Thank you very much.