Evaluation of Biocontrol agents against Lasiodiplodia theobromae causing Infl... (IOSR Journals)
This study evaluated the biocontrol potential of Trichoderma viride and Aspergillus niger against Lasiodiplodia theobromae, the causal agent of inflorescence blight disease in cashew. In dual culture tests, both T. viride and A. niger significantly inhibited the growth of L. theobromae compared to the control. A. niger exhibited the highest inhibition, reducing pathogen growth by 74.7% in one technique. In a second technique, T. viride showed the strongest antagonism, limiting pathogen growth by 90.5%. The study suggests that both tested biocontrol agents, particularly A. niger, have potential for managing inflorescence blight in cashew.
The psychiatrist José Miguel Gaona has published a book analyzing how the power of the mind can influence physical health and the time of death. He observed people who appeared healthy dying suddenly due to stress, and patients with AIDS who entered depressive states after their diagnosis and then died. Gaona also notes that many people who have near-death experiences report seeing tunnels, lights, or deceased relatives, suggesting the mind can have experiences just before death. He concludes these near-death experiences follow complex patterns and symbolism rather than being random neurological events.
- The document reports needle stick injury statistics from 2010-2011 at a medical facility, with over 65% involving nurses and 4% involving paramedics.
- It provides guidelines for safe sharps handling including proper PPE, safe disposal, and steps to take in the event of a needle stick injury.
- Questions about the viability of HIV, Hepatitis A, B, and C viruses outside the body are addressed, noting they can survive from very short periods to months depending on the virus and whether the blood is wet or dry.
De-virtualizing virtual Function Calls using various Type Analysis Technique... (IOSR Journals)
This document discusses techniques for optimizing virtual function calls in object-oriented programming languages. Virtual function calls are indirect calls that involve lookup through a virtual function table (VFT) at runtime, which has performance overhead compared to direct calls. Various static analysis techniques like Class Hierarchy Analysis (CHA) and Rapid Type Analysis (RTA) aim to resolve some virtual calls by determining the possible target types and replacing indirect calls with direct calls if a single target is possible. CHA uses the class hierarchy and declared types to determine possible target types, while RTA also considers instantiated types in the program to further reduce possible targets. The document analyzes examples to demonstrate how CHA and RTA can optimize some virtual calls.
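The narrowing step that CHA and RTA perform can be simulated in a few lines. The class hierarchy, method set, and instantiated set below are invented examples; in a real compiler this analysis runs over the program's intermediate representation rather than Python dictionaries.

```python
# A toy model of Class Hierarchy Analysis (CHA) vs. Rapid Type Analysis (RTA).
# Class names, the hierarchy, and the example program are illustrative only.

# subclass relation: class -> set of direct subclasses
hierarchy = {
    "Shape": {"Circle", "Square", "Triangle"},
    "Circle": set(), "Square": set(), "Triangle": set(),
}

def subtypes(cls):
    """All types a variable declared as `cls` may hold: cls plus transitive subclasses."""
    out = {cls}
    for sub in hierarchy.get(cls, ()):
        out |= subtypes(sub)
    return out

def cha_targets(declared_type, overriders):
    """CHA: any subtype of the declared type that overrides the method is a possible target."""
    return subtypes(declared_type) & overriders

def rta_targets(declared_type, overriders, instantiated):
    """RTA: additionally intersect with types actually instantiated in the program."""
    return cha_targets(declared_type, overriders) & instantiated

overriders = {"Circle", "Square", "Triangle"}    # classes providing draw()
instantiated = {"Circle"}                        # only `new Circle()` appears in the program

print(sorted(cha_targets("Shape", overriders)))                # 3 candidates: call stays virtual
print(sorted(rta_targets("Shape", overriders, instantiated)))  # 1 candidate: can de-virtualize
```

When the target set shrinks to a single class, the indirect call through the VFT can be replaced by a direct call, which is exactly the optimization the document describes.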
This document presents the key sections for developing a research project, including identification of the problem, formulation of general and specific objectives, the theoretical framework, the hypothesis or assumption, the variables, the population or sample, data collection techniques and instruments, and the data procedures and analysis.
Jamming Attacks Prevention in Wireless Networks Using Packet Hiding Methods (IOSR Journals)
This document discusses selective jamming attacks in wireless networks and methods to prevent them. It begins by describing how the open nature of wireless networks leaves them vulnerable to jamming attacks. It then discusses different types of jamming attacks and notes that selective jamming, which targets specific important packets, is more effective than continuous jamming. The document proposes using cryptographic techniques like commitment schemes and puzzles combined with physical layer parameters to prevent real-time packet classification and selective jamming. It reviews related work on jamming attacks and defenses. Finally, it outlines the problem statement, system model, and the contribution of using symmetric encryption and resisting brute force block encryption attacks to reduce jamming through packet hiding.
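One building block mentioned above, the commitment scheme, can be sketched with a hash function: the sender transmits a commitment that hides the packet's class, and reveals the key only after transmission, so a jammer cannot classify the packet in real time. This is a generic hash-commitment sketch, not the paper's exact construction, which also involves physical-layer parameters.

```python
# Minimal hash-based commitment: commit = H(key || packet), revealed later.
import hashlib, os

def commit(packet: bytes):
    key = os.urandom(16)                        # fresh random key per packet
    c = hashlib.sha256(key + packet).digest()   # commitment hides the packet's contents
    return c, key

def verify(c: bytes, key: bytes, packet: bytes) -> bool:
    # receiver checks the opened commitment once the key is revealed
    return hashlib.sha256(key + packet).digest() == c

c, key = commit(b"ROUTE-UPDATE")        # jammer sees only c while the packet is in flight
print(verify(c, key, b"ROUTE-UPDATE"))  # genuine packet verifies
print(verify(c, key, b"DATA"))          # a different packet does not
```

The binding property matters here: the jammer cannot find another packet matching the same commitment, and the hiding property keeps the packet type unclassifiable until the reveal.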
Classification By Clustering Based On Adjusted Cluster (IOSR Journals)
This document summarizes a research paper that proposes a new technique called "Classification by Clustering" (CbC) to define decision trees based on cluster analysis. The technique is tested on two large HR datasets. CbC involves running a clustering algorithm on the dataset without using the target variable, calculating the target variable distribution in each cluster, setting a threshold to classify entities, fine-tuning the results by weighting important attributes, and testing the results on new data. The paper finds that CbC can provide meaningful decision rules even when conventional decision trees fail to do so, and in some cases CbC performs better. A new evaluation measure called Weighted Group Score is also introduced to assess models when conventional measures cannot be used.
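The core CbC steps, clustering without the target, measuring the target distribution per cluster, and thresholding, can be sketched with stdlib Python. The data, the fixed centroids (standing in for a real k-means result), and the 0.5 threshold are all invented for illustration.

```python
# Sketch of Classification-by-Clustering: label each cluster by the target
# rate inside it, then classify new entities via their cluster's label.
from collections import defaultdict

# (feature, target) pairs; the target is NOT used for clustering
data = [(1.0, 0), (1.2, 0), (0.9, 1), (8.0, 1), (8.3, 1), (7.9, 0)]

centroids = [1.0, 8.0]   # pretend these came from a clustering algorithm
assign = lambda x: min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

counts = defaultdict(lambda: [0, 0])       # cluster -> [total, positives]
for x, y in data:
    c = assign(x)
    counts[c][0] += 1
    counts[c][1] += y

threshold = 0.5                            # clusters at or above this rate are labelled 1
rule = {c: int(pos / total >= threshold) for c, (total, pos) in counts.items()}

print(rule)                # per-cluster decision rule
print(rule[assign(8.1)])   # classify a new entity by its cluster's label
```

The fine-tuning step the paper describes would correspond to re-weighting attributes in the distance function before re-clustering.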
A Novel PSNR-B Approach for Evaluating the Quality of De-blocked Images (IOSR Journals)
This document discusses evaluating the quality of deblocked images using different quality assessment metrics. It proposes a new metric called PSNR-B that includes a blocking effect factor in PSNR calculations. The document compares PSNR-B to PSNR and SSIM metrics. It studies the effect of quantization step size on measured image quality and analyzes how deblocking algorithms like lowpass filtering can reduce blocking artifacts but also introduce new distortions. Simulation results show PSNR-B correlates better than PSNR with subjective quality judgments of deblocked images.
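The PSNR-B idea, folding a blocking effect factor (BEF) into the MSE term, can be sketched on a 1-D signal. The boundary definition and weighting below are simplified stand-ins; the paper's exact BEF formulation differs.

```python
# Hedged sketch: PSNR vs. a PSNR-B-style metric that penalizes discontinuities
# at block boundaries of the test image.
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(ref, img, peak=255.0):
    return 10 * math.log10(peak ** 2 / mse(ref, img))

def psnr_b(ref, img, block=4, peak=255.0):
    # BEF here: mean squared jump across block boundaries (simplified)
    edges = [(img[i] - img[i + 1]) ** 2 for i in range(block - 1, len(img) - 1, block)]
    bef = sum(edges) / len(edges)
    return 10 * math.log10(peak ** 2 / (mse(ref, img) + bef))

ref = [100] * 16                 # flat 1-D "image"
deb = [98] * 8 + [104] * 8       # deblocked signal with a residual boundary jump
print(round(psnr(ref, deb), 2))
print(round(psnr_b(ref, deb), 2))  # lower: the boundary discontinuity is penalized
```

Two signals with the same MSE can thus score differently under PSNR-B if one concentrates its error at block edges, which is what aligns the metric with subjective judgments of blockiness.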
A New Theoretical Approach to Location Based Power Aware Routing (IOSR Journals)
This document proposes a new theoretical approach to location based power aware routing in mobile ad hoc networks (MANETs). It aims to extend the network lifetime by improving power utilization during routing. The approach uses nodes' location information, remaining battery power, and bandwidth status to assign link stability and select routes with lower "uptime values" and minimum bandwidth over time. This is hypothesized to better utilize nodes' power sources and bandwidth. The document outlines calculating a root up time factor for each node based on its power backup and required power, and only using nodes with maximum backup. It concludes future work will design and validate a new protocol based on this approach.
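The node-ranking step can be sketched as a simple ratio: how long a node's remaining power backup can sustain its required forwarding power. The field names and the ratio itself are illustrative guesses at the "root up time factor" the document describes.

```python
# Rank candidate forwarding nodes by an uptime factor and prefer the node
# with the largest power backup relative to its drain rate.
nodes = [
    {"id": "A", "backup_mW": 900, "required_mW": 300},
    {"id": "B", "backup_mW": 400, "required_mW": 250},
    {"id": "C", "backup_mW": 800, "required_mW": 200},
]

def uptime_factor(n):
    # how long the node can keep forwarding at its current drain rate
    return n["backup_mW"] / n["required_mW"]

best = max(nodes, key=uptime_factor)   # prefer nodes with maximum effective backup
print(best["id"], round(uptime_factor(best), 2))
```

A full protocol would combine this factor with location and bandwidth status when assigning link stability, as the approach outlines.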
Combining both Plug-in Vehicles and Renewable Energy Resources for Unit Commi... (IOSR Journals)
This document presents a study that combines plug-in electric vehicles with vehicle-to-grid technology (V2G), renewable energy resources like wind and solar, and existing power plants, to optimize unit commitment in smart grids. The goal is to minimize total costs and emissions. A genetic algorithm is used to optimize scheduling of generation units, V2G vehicles providing spinning reserves, and time-varying renewable sources over a 24-hour period to meet load demand at lowest cost while satisfying constraints. Simulation results validate that integrating V2G and renewable energy sources can effectively reduce costs and emissions for the smart grid.
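The genetic-algorithm scheduling step can be sketched for a single hour: a chromosome is one on/off bit per generating unit, and fitness is fuel cost plus a heavy penalty when committed capacity misses demand. All numbers are invented; V2G reserves and hourly renewable output would enter the fitness as additional terms over the 24-hour horizon.

```python
# Compact GA sketch for single-hour unit commitment (lower fitness is better).
import random

random.seed(1)
CAP  = [100, 80, 50, 30]   # MW per unit
RATE = [20, 25, 40, 60]    # $/MWh per unit
DEMAND = 160               # MW for this hour

def fitness(bits):
    cap  = sum(c for b, c in zip(bits, CAP) if b)
    cost = sum(c * r for b, c, r in zip(bits, CAP, RATE) if b)
    return cost + 1000 * max(0, DEMAND - cap)   # penalize unmet demand

def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

pop = [[random.randint(0, 1) for _ in CAP] for _ in range(8)]
for _ in range(50):                        # evolve: keep elites, mutate them
    pop.sort(key=fitness)
    pop = pop[:4] + [mutate(p) for p in pop[:4]]

best = min(pop, key=fitness)
print(best, fitness(best))
```

A production formulation would also add crossover, minimum up/down-time constraints, and spinning-reserve requirements covered by the V2G fleet.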
Requirements and Challenges for Securing Cloud Applications and Services (IOSR Journals)
This document discusses the requirements and challenges for securing cloud applications and services. It begins with an abstract framing cloud computing security as complex because many interacting factors are involved. The document then provides context on cloud computing architectural frameworks and models to help evaluate security risks when adopting cloud services. It discusses key aspects of cloud architecture like deployment models, service models, and multi-tenancy that impact security. Understanding these relationships is important for informed risk management decisions regarding cloud adoption strategies.
Effect of Age of Spawned Catfish (Clarias Gariepinus) Broodstock on Quantity ... (IOSR Journals)
This study examined the effect of broodstock age on egg and milt production in catfish (Clarias gariepinus) and subsequent fry growth. Older female broodstock (24-30 months) produced more eggs (260-300g) than younger females (150-160g at 15-18 months). Similarly, older males (24-32 months) produced more milt (280-320g) than younger males (200-240g at 16-20 months). Hatchability and fry growth were also higher for eggs and fry from older broodstock. The study recommends using broodstock at least 24 months old to obtain optimal egg quantity, hatchability, and fry growth.
The Sofitel resort construction project is progressing as planned as of November 14th, 2012. The foundation and framing for the main building is complete and roofing is underway. Electrical and plumbing rough-ins are in process for the first floor with drywall installation beginning on the second level.
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is an open access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
Efficiency of Prediction Algorithms for Mining Biological Databases (IOSR Journals)
This document analyzes the efficiency of various prediction algorithms for mining biological databases. It discusses prediction through mining biological databases to identify disease risks. It then evaluates several prediction algorithms (ZeroR, OneR, JRip, PART, Decision Table) on a breast cancer dataset using measures like accuracy, sensitivity, specificity, and predictive values. The results show that the JRip and PART algorithms generally had the highest accuracy rates, around 70%, while ZeroR had the lowest accuracy. However, ZeroR had a perfect positive predictive value. The study aims to assess the most efficient algorithms for predictive mining of biological data.
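The evaluation measures named above, plus the ZeroR baseline (always predict the majority class), can be computed from a confusion matrix with stdlib Python. The tiny label set is invented for illustration.

```python
# Accuracy, sensitivity, specificity, and positive predictive value from a
# confusion matrix, plus a ZeroR majority-class baseline.
from collections import Counter

actual    = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(a == p == 1 for a, p in zip(actual, predicted))
tn = sum(a == p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

accuracy    = (tp + tn) / len(actual)
sensitivity = tp / (tp + fn)          # recall on the positive class
specificity = tn / (tn + fp)
ppv         = tp / (tp + fp)          # positive predictive value

# ZeroR: ignore all attributes, predict the majority class for every instance
majority = Counter(actual).most_common(1)[0][0]
zero_r_acc = sum(a == majority for a in actual) / len(actual)

print(accuracy, sensitivity, specificity, ppv, zero_r_acc)
```

This also illustrates the study's caveat about single metrics: a classifier can score a perfect PPV while its overall accuracy is poor, which is why the study reports several measures side by side.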
Prototyping the Future Potentials of Location Based Services in the Realm of ... (IOSR Journals)
This document discusses prototyping future potentials of location-based services in e-governance. It begins by defining ubiquitous computing, context-aware applications, and location-based services. It then outlines two classes of LBS - pull, where users actively request location-based data, and push, where networks proactively provide information to users. The document also describes the key components of an LBS communication model, including user devices, communication networks, positioning systems, application servers, and data servers. Lastly, it discusses challenges with incorporating location and context into existing governance models.
GPT discusses various ways that language models can acquire external information as context to improve responses, including:
1) Querying search engines using APIs to incorporate search results into responses
2) Recognizing tasks from prompts and accessing databases or APIs to incorporate relevant information
3) Summarizing, calculating, and verifying information from external sources to provide more accurate answers
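The first step above can be sketched as a retrieval-augmented prompt: fetch search results and prepend them to the model prompt as context. `web_search` and `ask_llm` are hypothetical stand-ins; a real system would call an actual search API and model endpoint here.

```python
# Sketch: inject search results into a language-model prompt as context.

def web_search(query):   # hypothetical search-API wrapper (canned result)
    return ["Mount Everest is 8,849 m tall (2020 survey)."]

def ask_llm(prompt):     # hypothetical model call; reports how much context it got
    return f"(model answers using {prompt.count('Context:')} context block)"

def answer_with_search(question):
    snippets = web_search(question)
    context = "\n".join(f"Context: {s}" for s in snippets)
    prompt = f"{context}\nQuestion: {question}\nAnswer using only the context."
    return ask_llm(prompt)

print(answer_with_search("How tall is Mount Everest?"))
```

Steps 2 and 3 follow the same shape: route the recognized task to a database or API instead of a search engine, then have the model summarize or verify the retrieved material before answering.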
This document describes the 2017 IEEE CIG Game Data Mining Competition hosted by Sejong University in South Korea. The competition provided access to game log data from Blade & Soul to predict player churn and survival time. There were two tracks - one for churn prediction and one for survival analysis. 13 teams participated in track 1 and 5 teams in track 2. The winning team YD from Japan used ensemble methods like LSTM, DNN and extra trees for track 1 and ensemble conditional inference trees for track 2. Other top techniques included random forest and light gradient boosting machines. The competition helped advance game data mining research by providing a large real-world dataset.
As part of the team delivering Snap, an open telemetry framework, I've run through dozens of use cases where gathering disparate metrics from services can roll up into meaningful diagrams for operations engineers and developers alike. I will introduce you to the concept of telemetry by talking through the basics then using Snap's plugin model to collect, process and publish these measurements into meaningful graphs using open source tools.
ML Infra @ Spotify: Lessons Learned - Romain Yon - NYC ML Meetup
Original event: https://www.meetup.com/NYC-Machine-Learning/events/256605862/
--
"Doing large scale ML in production is hard" – Everyone who's tried
This talk is focused on ML systems, especially the less obvious pitfalls that have caused us trouble at Spotify.
This talk assumes a certain level of familiarity with ML: you'll get the most out of it if you have some experience with applied ML, ideally on production systems.
Romain Yon is a Staff ML Engineer at Spotify. Over the years, Romain has worked on many of the core ML systems that power Spotify today (Music Recommendation, Catalog Quality, Search Ranking, Ads, ..).
During the past year, Romain has been mostly focusing on designing reusable ML Infrastructure that can be leveraged throughout Spotify.
Prior to Spotify, Romain co-founded the startup https://linkurio.us while getting his MSc in ML from Georgia Tech.
Petascale Analytics - The World of Big Data Requires Big Analytics (Heiko Joerg Schick)
The document discusses big data and analytics technologies. It describes how new technologies like Hadoop and MapReduce enable processing of extremely large datasets. It also discusses future technologies like exascale computing and storage class memory that will be needed to manage increasing data volumes and support real-time analytics.
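The MapReduce model mentioned above can be condensed to a single-process sketch: a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Hadoop distributes exactly these phases across machines; the word-count example below is the classic illustration.

```python
# Single-process word count in MapReduce style: map -> shuffle -> reduce.
from collections import defaultdict

docs = ["big data needs big analytics", "big analytics at scale"]

# map: each document -> (word, 1) pairs
mapped = [(w, 1) for d in docs for w in d.split()]

# shuffle: group values by key
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# reduce: sum each group
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts["big"], counts["analytics"])
```

On a cluster, the map and reduce functions stay this small; the framework handles partitioning the shuffle and re-running failed tasks, which is what makes the model scale to the dataset sizes the document describes.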
Kusto (Azure Data Explorer) Training for R&D - January 2019 (Tal Bar-Zvi)
This document summarizes a training presentation on Azure Data Explorer (Kusto). The presentation covered:
1. An introduction to Kusto as a new way to analyze big data and logs that is fast, easy to use, and helps understand services quickly.
2. Examples of different Kusto query types including counting, filtering, aggregating, rendering graphs, and combining queries.
3. How Kusto is used at Taboola to analyze HTTP logs from their CDN, including database sizes and architecture.
4. Additional features like dashboards, alerts, notebooks, and community resources for learning more.
5. A question and answer session addressing common questions about Kusto.
Data Science, Personalisation & Product management (Bhaskar Krishnan)
Does Data Matter?
Why are we discussing Data & Data Science?
Why is it relevant to Product Management?
What is Identity?
How do we understand users?
How do we Personalise user experiences?
What is Risk and Trust & Safety?
This document discusses how game developers can use data collection tools to improve their games. It provides examples of how data collection can help with troubleshooting design problems, performance testing, detecting usage patterns, and adjusting difficulty levels. The document also outlines the key elements of rational game design, which uses quantifiable data to balance challenge and fun. It recommends collecting a variety of usage data, looking for patterns, modifying game elements, and repeating the process in an ongoing feedback loop to craft the ideal player experience. Finally, it promotes the tools on id.net for tracking user registrations, engagement, and implementing achievements/leaderboards to further incentivize player participation.
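The collect, detect-pattern, adjust loop described above can be sketched with per-level failure rates flagging levels that need rebalancing. The event log and the 0.7 threshold are invented for illustration.

```python
# Sketch of a data-driven difficulty feedback loop: aggregate playtest
# outcomes per level and flag levels whose failure rate is too high.
from collections import defaultdict

events = [  # (level, outcome) pairs collected from playtests
    (1, "win"), (1, "win"), (1, "fail"),
    (2, "fail"), (2, "fail"), (2, "fail"), (2, "win"),
]

stats = defaultdict(lambda: [0, 0])          # level -> [attempts, fails]
for level, outcome in events:
    stats[level][0] += 1
    stats[level][1] += outcome == "fail"

too_hard = [lvl for lvl, (n, f) in stats.items() if f / n > 0.7]
print(too_hard)   # these levels get easier in the next iteration of the loop
```

Repeating this cycle after each adjustment is the ongoing feedback loop the document recommends for converging on the intended challenge curve.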
AI, Search, and the Disruption of Knowledge Management (Trey Grainger)
Trey Grainger discussed how search has evolved from basic keyword search to more advanced capabilities like understanding user intent, providing personalized search, and augmented search using machine learning and AI. He explained the concept of "reflected intelligence" where user interactions with search results are used to continuously improve search quality through techniques like signals boosting, learning to rank, and collaborative filtering. Grainger also outlined how knowledge graphs can help power semantic search by modeling relationships between entities to better understand queries and provide more relevant results.
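The "signals boosting" technique mentioned above can be sketched as aggregating click signals per (query, document) pair and folding them into the ranking score. The base scores, click log, and 0.1 boost weight are invented for illustration.

```python
# Sketch of signals boosting: click history re-ranks documents for a query.
from collections import Counter

clicks = [("laptop", "doc1"), ("laptop", "doc3"), ("laptop", "doc3"),
          ("laptop", "doc3"), ("laptop", "doc1"), ("laptop", "doc2")]
signal = Counter(clicks)                                # (query, doc) -> click count

base_scores = {"doc1": 1.2, "doc2": 1.1, "doc3": 1.0}   # relevance-only scores

def boosted(query):
    return sorted(base_scores,
                  key=lambda d: base_scores[d] + 0.1 * signal[(query, d)],
                  reverse=True)

print(boosted("laptop"))   # doc3's heavy click history lifts it past doc2
```

This is the "reflected intelligence" loop in miniature: user interactions feed back into ranking, so search quality improves continuously without manual relevance tuning.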
詹剑锋:Big databench—benchmarking big data systemshdhappy001
This document discusses BigDataBench, an open source project for big data benchmarking. BigDataBench includes six real-world data sets and 19 workloads that cover common big data applications and preserve the four V's of big data. The workloads were chosen to represent typical application domains like search engines, social networks, and e-commerce. BigDataBench aims to provide a standardized benchmark for evaluating big data systems, architectures, and software stacks. It has been used in several case studies for workload characterization and evaluating the performance and energy efficiency of different hardware platforms for big data workloads.
The document discusses efficient exploitation of remote sensing data. It summarizes Grega Milcinski's presentation on Sentinel Hub, a platform for accessing and processing satellite data. It notes that large volumes of remote sensing data are created daily but pre-processing into "data cubes" limits flexibility. The document recommends processing data on-demand using cloud computing. Sentinel Hub is highlighted as an example that provides open data access through APIs and applications using AWS services. It processes over 50 million requests per month from various data sources. The document concludes that public data should be openly available in the cloud and reasonable business models are needed for commercial data.
This document discusses future trends in the IT business in the US and Kerala, with a focus on healthcare. It notes that the $1.7 trillion IT business is growing at 7.1-8.7% annually and covers areas like wireless, UC, video, cloud, smart technologies, and social media. It also discusses how IT is enabling new areas like the internet of things, analytics, and smarter villages. Specifically for healthcare, it discusses innovations in areas like home healthcare, wellness, reputation management, and free cloud-based EMR systems.
SmartData Webinar: Applying Neocortical Research to Streaming AnalyticsDATAVERSITY
We are witnessing an explosion of sensors and machine generated data. Every server, every building, and every device generates a continuous stream of information that is ever changing and potentially valuable. The existing big data paradigm requires storing data for batch analysis, and extensive modeling by a human expert, prior to deployment. This is incredibly inefficient and cannot scale.
In this webinar, Ahmad will describe a new paradigm for streaming data algorithms, based on recent neuroscience findings and on the computational properties of the neocortex. These systems are highly automated, adapt to changing statistics, and naturally deal with temporal data streams. Many of the core ideas have been implemented in the open source project NuPIC, and validated in commercial anomaly detection and predictive maintenance applications. Given the massive increase in the number of data sources, a general-purpose automated approach is the only scalable way to effectively analyze and act on continuously streaming information.
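The adaptive-streaming idea described above can be given a minimal flavour in code. This is a generic exponentially weighted detector for illustration only, not NuPIC's HTM algorithm; the parameter values are arbitrary assumptions:

```python
class StreamingAnomalyDetector:
    """Flags points far from an exponentially weighted running mean/variance."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha, self.threshold = alpha, threshold
        self.mean, self.var = 0.0, 1.0

    def update(self, x):
        # Score the point against the current statistics...
        deviation = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
        # ...then adapt the statistics, so the model tracks drifting streams.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return deviation > self.threshold

det = StreamingAnomalyDetector()
stream = [1.0] * 50 + [100.0]          # steady signal, then a spike
flags = [det.update(x) for x in stream]
print(flags[-1])                       # the spike is flagged
```

The point of the sketch is the shape of the computation, not the model: score first, adapt second, no stored history and no offline training phase.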
Mastering the Game - Big Data and GamificationPete Baikins
How best to leverage big data through the use of gamification to engage, empower and up-skill employees. Learning from how games handle big data will help us play better at the game of work. You’ll hear the ways big data are currently combined with gamification, recommendations for doing this well and see a process for successfully combining gamification and big data in your own systems and processes. Talk delivered by Pete Jenkins at Gamification Turkey in Istanbul on 16th November 2017.
This presentation will provide insight on the phenomenon and emerging trend that is ChatGPT.
It will elaborate on its history, usage, workings, popularity and usefulness in social media marketing.
Similar to Get the Google Feeling! Supporting Users in Finding Relevant Sources (20)
Focused Exploration of Geospatial Context on Linked Open DataThomas Gottron
Talk at IESD 2014 workshop in Riva del Garda (at ISWC).
Abstract: The Linked Open Data cloud provides a wide range of different types of information which are interlinked and connected. When a user or application is interested in specific types of information under time constraints, it is best to explore this vast knowledge network in a focused and directed way. In this paper we address the novel task of focused exploration of Linked Open Data for geospatial resources, helping journalists in real-time during breaking news stories to find contextual geospatial information related to geoparsed content. After formalising the task of focused exploration, we present and evaluate five approaches based on three different paradigms. Our results on a dataset with 425,338 entities show that focused exploration on the Linked Data cloud is feasible and can be implemented at very high levels of accuracy of more than 98%.
Leveraging the Web of Data: Managing, Analysing and Making Use of Linked Open...Thomas Gottron
The intensive growth of the Linked Open Data (LOD) Cloud has spawned a web of data where a multitude of data sources provides huge amounts of valuable information across different domains. Nowadays, when accessing and using Linked Data, more and more often the challenging question is not so much whether there is relevant data available, but rather where it can be found, how it is structured, and how to make best use of it.
In this lecture I will start by giving a brief introduction to the concepts underlying LOD. Then I will focus on three aspects of current research:
(1) Managing Linked Data. Index structures play an important role for making use of the information in LOD cloud. I will give an overview of indexing approaches, present algorithms and discuss the ideas behind the index structures.
(2) Analysing Linked Data. I will present methods for analysing various aspects of LOD, ranging from an information-theoretic analysis for measuring structural redundancy, through formal concept analysis for identifying alternative declarative descriptions, to a dynamics analysis for capturing the evolution of Linked Data sources.
(3) Making Use of Linked Data. Finally I will give a brief overview and outlook on where the presented techniques and approaches are of practical relevance in applications.
(Talk at the IRSS summerschool 2014 in Athens)
Perplexity of Index Models over Evolving Linked Data Thomas Gottron
ESWC presentation on the stability of 12 different index models for linked data. Provides a formalisation of the index models as well as stability evaluation based on data distributions and information theoretic metrics.
Making Use of the Linked Data Cloud: The Role of Index StructuresThomas Gottron
The intensive growth of the Linked Open Data Cloud has spawned a web of data where a multitude of data sources provides huge amounts of valuable information across different domains. Nowadays, when accessing and using Linked Data, more and more often the challenging question is not so much whether there is relevant data available, but rather where it can be found and how it is structured. Thus, index structures play an important role for making use of the information in the LOD cloud. In this talk I will address three aspects of Linked Data index structures: (1) a high-level view and categorization of index structures and how they can be queried and explored, (2) approaches for building index structures and the need to maintain them, and (3) some example applications which greatly benefit from indices over Linked Data.
Challenges in Managing Online Business CommunitiesThomas Gottron
- Online business communities are a valuable asset for companies like SAP and IBM, but require appropriate metrics to manage their large scale and high volumes of activity.
- Effective metrics track content, structure, behavior, and dynamics of the communities over time to understand risk and inform management strategies.
- A framework is needed that embeds various metrics into a comprehensive approach for monitoring community risks and developing treatment plans.
ESWC 2013: A Systematic Investigation of Explicit and Implicit Schema Informa...Thomas Gottron
The document presents a method to analyze the redundancy of schema information on the Linked Open Data cloud. It examines the entropy and conditional entropy of type and property distributions across several LOD datasets. The results show that properties provide more informative schema information than types, and indicate types better than types indicate properties. There is generally high redundancy between types and properties, ranging from 63-88% on the analyzed segments of the LOD cloud. Future work could analyze schema information at the data provider level and over time.
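The entropy measurements behind such a redundancy analysis can be sketched in a few lines. The (type, property) observations below are invented for illustration; a real run would stream over the schema triples of an actual LOD dataset:

```python
from collections import Counter
import math

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# Hypothetical (type, property) co-occurrence observations.
pairs = [("Person", "name"), ("Person", "name"), ("Person", "knows"),
         ("Place", "name"), ("Place", "lat"), ("Place", "lat")]

types = Counter(t for t, _ in pairs)
joint = Counter(pairs)

h_types = entropy(types.values())
# Conditional entropy H(property | type) = H(type, property) - H(type);
# low values mean the two kinds of schema information are highly redundant.
h_prop_given_type = entropy(joint.values()) - h_types
print(h_types, h_prop_given_type)
```

Comparing H(property | type) against H(type | property) in the same way is what lets the analysis say which direction carries more schema information.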
Challenging Retrieval Scenarios: Social Media and Linked Open DataThomas Gottron
Invited talk given in April 2012 at USI in Lugano at the IR research group of Fabio Crestani. Review of the work on Interestingness on Twitter and schema based indices on Linked Open Data (SchemEX).
Finding Good URLs: Aligning Entities in Knowledge Bases with Public Web Docum...Thomas Gottron
This document summarizes a workshop on aligning entities in knowledge bases with representations on the public web. It presents an experimental evaluation of using label search, exploiting link structure, and type filtering to map 100 entities from knowledge bases to URLs on the public web. The best performing methods were found to be label search and focused HITS, and adding type filtering improved results for all methods. Next steps include further investigating domain-dependent performance.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study leads to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
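For reference, the Gonadosomatic Index is conventionally computed as gonad mass relative to total body mass. A minimal sketch, with hypothetical single-specimen measurements (the weights are invented, not the study's data):

```python
def gonadosomatic_index(gonad_weight_g, body_weight_g):
    """GSI: gonad mass as a percentage of total body mass."""
    return gonad_weight_g / body_weight_g * 100.0

# Hypothetical measurement for one female specimen, in grams; the study's
# mean GSI of 12.83 would be the average of such values over all 28 females.
gsi = gonadosomatic_index(0.25, 2.0)
print(gsi)  # 12.5
```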
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long-standing, and ongoing, scientific development as an exemplar. And so, I chose the ever-evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect micronuclei formation inside the cells of nearly every multicellular organism. Micronuclei form during chromosomal separation at metaphase.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. The model offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Phenomics assisted breeding in crop improvementIshaGoswami9
As the global population increases, it is expected to reach about 9 billion by 2050, and due to climate change it will be difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping; thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
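The link between a localized field and flat rotation can be made explicit with a standard weak-field sketch (a textbook derivation, not the paper's own): a circular orbit of constant speed $v_0$ at every radius requires the centripetal acceleration to balance the potential gradient,

$$\frac{v_0^2}{r} = \frac{d\Phi}{dr} \quad\Longrightarrow\quad \Phi(r) = v_0^2 \ln\frac{r}{r_0},$$

which is the logarithmic potential of an isothermal sphere, consistent with the light-deflection behaviour noted above.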
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Get the Google Feeling! Supporting Users in Finding Relevant Sources
1. Get the Feeling!
Supporting Users in Finding Relevant Sources
of Linked Open Data at Web-Scale
Thomas Gottron, Ansgar Scherp, Bastian Krayer, Arne Peters
2. System Support for Searchers
[Figure: Bates' spectrum of system involvement in search, from none (pure user activity) through displaying options, monitoring and recommending, and executing on command, up to fully automatic operation; some levels are marked "hold for later" or "(skip)". Operational systems cover the lower levels of involvement, while the higher levels mark the area of recommended development.]
Bates, M.J.: Where should the person stop and the information search interface start?
Information Processing and Management 26(5), 575–591 (1990)
Get the Google Feeling Thomas Gottron BTC 2012 2
3. System Support Helps: Query Specific Snippets
Recall · Precision · Speed · Satisfaction
Tombros, A., Sanderson, M.: Advantages of query biased summaries in information retrieval.
SIGIR’98. pp. 2–10 (1998)
4. System Support Helps: Query Suggestions
of all queries were chosen from suggestions
Find entry point · Think out of the box · Identify new query terms
Kelly, D., et al. Effects of popularity and quality on the usage of query suggestions during
information search. CHI '10, p 45-54, (2010)
8. „Under the hood“
[Figure: processing pipeline — SPARQL query translation (Generalize / Specify / Select / Count) feeding the retrieval and ranking of data sources and snippets.]
• 1 query for result set and result set size
• N queries for ranking data and snippets
• 2 queries per related query
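The per-request query budget on this slide can be sketched as plain query templates. The SPARQL text, vocabulary, and example URIs below are illustrative assumptions, not the system's actual queries:

```python
# Sketch of the per-request SPARQL budget: one combined query for the
# result set and its size, plus one query per result for snippet data.

def result_set_query(type_uri):
    # One query fetches the result set together with its size.
    return (f"SELECT ?src (COUNT(?src) AS ?n) "
            f"WHERE {{ ?src a <{type_uri}> }} GROUP BY ?src")

def snippet_query(source_uri):
    # One further query per result for its ranking data and snippet triples.
    return f"SELECT ?p ?o WHERE {{ <{source_uri}> ?p ?o }} LIMIT 10"

results = ["http://example.org/ds1", "http://example.org/ds2"]
queries = [result_set_query("http://example.org/Dataset")]
queries += [snippet_query(s) for s in results]  # 1 + N queries in total
print(len(queries))
```

Batching the result set and its count into a single query, and bounding snippet queries with LIMIT, is what keeps the per-request load linear in the number of displayed results.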
9. Stats
Use of the complete BTC 2012 dataset
Index size
133M schema triples
224M payload triples
Commodity hardware
Data processing
LODation service provision
Index construction (15h) and optimization (5h)
Response time: < 1s on a single CPU machine