DN 2017 | Data product discovery - The product perspective on digital transfo... (Dataconomy Media)
“The world’s most valuable resource,” ran the title in The Economist in May, referring to data. With computing resources in the cloud, cheap sensors in physical products, and advanced machine learning algorithms to make use of the collected data, the relevance of data will only increase.
We believe that these developments open up new opportunities for companies to develop and profit from data products. They can include feedback loops in their existing products in order to improve them, or create new products based on the usage data of an existing one. The usage data itself can even become the USP of the physical product. Machine learning can help to automate customer jobs that previously required tedious data entry by the customer. Companies can also realize more complex business models, since the users of a service and the buyers of its data may be very different.
In the talk, we illustrate our definition of data products with project examples and share our insights on how to implement a data product successfully. We have adapted Lean Startup principles in order to get data products to the market quickly while maintaining the core factors of a valuable data product.
Dr. Christoph Tempich works as Chief Data Economist at inovex GmbH, an innovative service provider for digital transformation projects. He watches over the economic aspects of digital transformation and takes a strategic view on data economics. Furthermore, he supports companies in defining data products and improving their digital business models.
Solving the BI Adoption Challenge With Report Consolidation (ibi)
Check out the slides from a webcast with Rado Kotorov, chief innovation officer at Information Builders, on how to resolve data clutter in your organization with report consolidation.
View the webcast recording at: http://ow.ly/uzPP30alz3J
Big data approaches to estimate the impact of EU funding (Data4Impact)
Big Data approaches to estimate the impact of EU funding on innovation development. Presentation at the STI conference, 12 September 2018, Leiden, by Dr Lukas Pukelis and Vilius Stanciauskas (PPMI).
The true meaning of data by Maciej Dabrowski (Altocloud)
Maciej Dabrowski, Chief Data Scientist of Altocloud, was the keynote speaker at the OMiG Digital Summit in January 2016. Maciej presented 'The true meaning of data', illustrating both the important and fun aspects of data analytics.
Christoph Tempich, Thomas Leitermann from inovex: "What are data products and... (Dataconomy Media)
Dr. Christoph Tempich, Head of Product Discovery & Ownership and Thomas Leitermann, Product Owner at inovex GmbH: "What are data products and why are they different from other products?"
15x data growth in myThings’ real time ad campaigns (Mathias Golombek)
EXASolution handles 15x data growth in myThings’ real time ad campaigns
Consumers visit websites for different reasons, looking for different products and services. But 96% of them leave without actually buying or booking anything. To rekindle the interest of these anonymous users, marketers use dynamic banners with customized product recommendations to advertise on other sites. This is the principle that myThings, a programmatic ad solutions provider for the largest eCommerce brands, calls “dynamic personalized retargeting” – a data-driven advertising solution that personalizes content on banners in real time – whether on desktop, mobile or Facebook – to increase click and conversion rates by more than 150%. To achieve this, myThings offers its customers tailored retargeting campaigns based on data from the analytical in-memory database EXASolution.
Best practice for data interoperability (CRMT Digital)
IT should assume responsibility for data interoperability in an age when marketing campaigns generate and depend upon top-quality information from increasingly diverse sources.
Data interoperability is becoming essential for organisations who want to run successful marketing campaigns. As strategies evolve, an increasing amount of data is arriving from multiple sources, which needs to be managed, organised and used responsively. With this increase in data, Marketing is moving in on IT spend to narrow silos, but this runs the risk of inconsistencies across data handling. We've created a Slideshare that provides organisations with best practices for data interoperability, with a focus on why providing technical marketing training to Sales and Marketing is so important.
"Implementing AI for New Business Models and Efficiencies" - Parag Shrivastav... (Grid Dynamics)
Dynamic talks Seattle: Artificial Intelligence (AI) and data are foundational to the ideation of new business models that bring growth and efficiencies. This is in action in the pharmaceutical supply chain for reducing costs, increasing volumes, and optimizing contracts with suppliers. The implementation involves processes and tools that make data usable and overcome the challenges of culture, ethics, data scalability, and compute and engineering. Learn about the implementation of data collection, data management, and metadata management tools, and the modern data architecture that supports them. Discuss machine learning algorithms for growth and efficiency scenarios.
Did you know that 65% of companies are not confident that their content is consistent across print, web, and mobile channels? Join InfoTrends and Quark for a Content Automation trends eSeminar: The Top 7 Challenges Holding Back Your Content.
Staying up to date is not enough for companies today. Organizations must also constantly watch trends in order to predict and forecast the next steps for their business. The following document is an executive summary of the current situation and of the more notable trends, which will help readers understand the basics of the analytics market.
How To Improve Profitability & Outperform Your Competition: the Guide to Data... (A.J. Riedel)
Find out how adopting data-driven decision-making can reduce your risk of making costly marketing and product mistakes and improve your product sell-through in this free E-Book.
Data-driven marketing: what are the defining differences between successful... (BBPMedia1)
More and more data is available, but why do some organizations manage to use it as an engine for enormous growth while others fail to extract any added value from it? With DVJ Insights we surveyed 2,000 marketers in 9 countries, asking them everything about data-driven marketing, how they apply it, and which challenges they face. In this presentation I share the key insights and what organizations can learn from them to use data more successfully.
Learn about the emerging field of big data and advanced quantitative models and how the Rady School's MS in Business Analytics program is designed to solve important business problems.
Digital-Warriors-Marketing Roadmap with Big Data Analytics (JaysonBowden)
When the data speaks, why should you listen?
Big data has become a buzzword among marketers all over the world, and I am frankly sick of all the buzz without concrete value. In this context, my goal is to make Big Data real and tangible for marketers so you can realize the disruptive shift that is upon us. Every marketer is familiar with the 4 P’s of marketing (Product, Price, Place, and Promotion), so we will start our discussion in a way that begins to extend these concepts. I like to describe these as ‘The New 4 P’s of Marketing with Big Data: Personalization, Performance, Prediction, and Privacy’.
Big Data Analytics: A New Business Opportunity (Edward Curry)
This talk introduces Big Data analytics and how they can be used to deliver value within organisations. The talk will cover the transformational potential of creating data value chains between different sectors. Developing a Big Data analytics capability will be discussed in addition to the challenges facing the emerging data economy.
These are slides from Ellen Wagner's featured theme presentation, Making Learning Analytics Matter in the Educational Enterprise, from Blackboard World 2012, New Orleans, LA, July 12, 2012.
UX STRAT 2018 | Flying Blind On a Rocket Cycle: Pioneering Experience Centere... (Joe Lamantia)
After Oracle acquired Endeca, we all had to figure out what to do next. This case study describes building a learning-driven strategy capability to guide an adventurous product development group focused on the new domains of big data analytics and machine intelligence. I’ll share the outcomes of our efforts to launch new products chartered directly around customer experience value; outline the methods, tools, and perspectives that powered product discovery and strategic planning; share a framework and patterns for identifying and understanding emerging domains; and review the application of this toolkit to new situations.
Talking about Big Data generates a lot of questions; however, most of the focus is on the technologies and skills required to collect and store this volume of information as opposed to the insight that companies need to derive from it. What factors should organizations consider in order to ensure that they are capitalizing on their investments with these technologies? How do you break through business silos to enable sharing of data to increase organizational value? Leveraging his cross-industry experience at companies like The Walt Disney Company, Travelers Insurance and Demand Media, Brendan Aldrich will discuss the question of “big value” with industry examples and a particular focus on his current work to deploy a “data democracy” within the City Colleges of Chicago.
Session Discovery Topics:
• Big value - keeping an eye on the forest (assumptions, judgment and bias)
• Data democracy - increasing productivity with data transparency and open access
[Data Meetup] Data Science in Finance - Factor Models in Finance (Data Science Society)
In this talk, Metodi Nikolov, a Quantitative Researcher, reviews, without being exhaustive, the usage of factor models in finance – from the simplest single-factor linear regression models, through latent variables and beyond. The focus is not put solely on stocks but rather on exploring other data types. The hope is to give the listeners an appreciation for the different ways the models can be applied.
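As a minimal illustration of the simplest case mentioned above, a single-factor linear model can be fitted by ordinary least squares. The return series below is invented toy data, not from the talk:

```python
# Minimal single-factor model: r_asset = alpha + beta * r_market + eps,
# estimated by ordinary least squares on toy data (illustrative only).

def fit_single_factor(asset_returns, market_returns):
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((m - mean_m) * (a - mean_a)
              for m, a in zip(market_returns, asset_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    beta = cov / var_m               # exposure to the factor
    alpha = mean_a - beta * mean_m   # intercept (idiosyncratic drift)
    return alpha, beta

market = [0.01, -0.02, 0.015, 0.005, -0.01]
asset = [0.5 + 2.0 * r for r in market]  # exact linear relation, no noise
alpha, beta = fit_single_factor(asset, market)
# beta comes out as 2.0 and alpha as 0.5 for this noiseless series
```

The same covariance/variance estimator generalizes to multi-factor models via multiple regression.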
[Data Meetup] Data Science in Finance - Building a Quant ML pipeline (Data Science Society)
Georgi Kirov shares a common market-neutral statistical arbitrage framework. It helps showcase the many different ways to structure a systematic research project. From data reconciliation and signal backtesting to optimization and execution: what are some principled ways to evaluate and compare ML ideas? This process inevitably depends on the characteristics of a specific strategy, for instance, whether it is liquidity-taking or liquidity-making.
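The signal-backtesting step mentioned above can be sketched in a few lines; this is a toy illustration with invented data, not the framework from the talk, and a real pipeline would add transaction costs, slippage, and risk limits:

```python
# Toy signal backtest: hold a +1/-1 position given by the sign of the
# previous period's signal, and accrue the next period's return.

def sign(x):
    return (x > 0) - (x < 0)

def backtest(signals, returns):
    """signals[t] decides the position held over period t+1."""
    pnl = 0.0
    for t in range(len(signals) - 1):
        pnl += sign(signals[t]) * returns[t + 1]
    return pnl

signals = [0.4, -0.1, 0.3, 0.2]
returns = [0.00, 0.01, -0.02, 0.015]
total = backtest(signals, returns)  # 0.01 + 0.02 + 0.015 = 0.045
```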
[Data Meetup] Data Science in Journalism - Tanbih, QCRI and MIT (Data Science Society)
Check out our Data Science Meetup devoted to Data Science in Journalism
Dr. Preslav Nakov, Principal Scientist at QCRI, presented the #Tanbih news aggregator, which makes people aware of what they are reading.
The aggregator features media profiles that show the general factuality of reporting, the degree of propagandistic content, the hyper-partisanship, the leading political ideology, the general frame of reporting, the stance with respect to various claims and topics, as well as the audience reach and the audience bias in social media. This is part of the Tanbih project, which is developed in collaboration with MIT.
Special thanks to our partners from #Ontotext, #Telelink and #Leanplum!
#DSS #DataMeetup
Vassil Lunchev, CEO of Homeheed (https://www.homeheed.com/) presented at our July Meetup how to detect fake listings using #ComputerVision and #MachineLearning.
Imagine that you have 600,000 real estate listings with a total of 5,000,000 photos. What you know is that many of these listings are fake. In his presentation, Vassil shared some of the challenges of detecting the fake ones, including the approaches that work and those that do not. Apart from that, he presented what kind of additional data is necessary to detect the fake ones.
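One conceivable signal for such detection (an assumption for illustration only, not necessarily Homeheed's actual method) is the exact same photo being reused across many different listings:

```python
import hashlib
from collections import defaultdict

# Hypothetical fake-listing signal: flag listings that share a photo
# with several other listings. Byte-exact hashing only catches identical
# files; real systems would use perceptual hashes or learned embeddings.

def suspicious_listings(listing_photos, threshold=2):
    """listing_photos: {listing_id: [photo_bytes, ...]}.
    Flags listings whose photo appears in more than `threshold` listings."""
    photo_owners = defaultdict(set)
    for listing_id, photos in listing_photos.items():
        for photo in photos:
            digest = hashlib.sha256(photo).hexdigest()
            photo_owners[digest].add(listing_id)
    flagged = set()
    for owners in photo_owners.values():
        if len(owners) > threshold:
            flagged |= owners
    return flagged

listings = {
    "a": [b"kitchen", b"bedroom"],
    "b": [b"kitchen"],   # reuses a's photo
    "c": [b"kitchen"],   # reuses it again
    "d": [b"garden"],
}
flagged = suspicious_listings(listings)  # {"a", "b", "c"}
```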
Boyan Bonev and Demir Tonchev from Gaida.AI covered the process from the very initial concept to working software. The focus of their talk at our July Meetup was put on the challenges the domain of real estate presents for some of the standard approaches and models (#CollaborativeFiltering).
In the presentation, you can find information about the whole journey from #DataExploration and #modeling to the nitty-gritty of getting it all up and running in a production environment.
Demir and Boyan shared lessons they learned, mistakes they made and things they are still looking to improve in Gaida.AI (https://www.gaida.ai/).
Lessons Learned: Linked Open Data implemented in 2 Use Cases (Data Science Society)
In this presentation for the ESSnet Linked Open Statistics final event, Sergi Segiev presents the lessons learned from two implemented use cases that used open data to find valuable insights.
You can also refer to the presentation 'Data Reveals Corruption Practices' by Yasen Kiprov - http://bit.ly/2WsFxsP
The presentation on AI methods for localization in a noisy environment, held by Ana Antonova and Kameliya Kosekova, was introduced at Robotics Days '19.
In the next slides, you can find information on techniques for robot localization in more detail, as well as several GitHub repos on the topic.
Team Nishki, consisting of 11th-graders, presents a Hackathon ML solution to a Kaufland Airmap case, for which they won a Datathon special award.
Used methodologies and algorithms: OCR, DarkFlow, YOLO
The solution can be found at:
https://www.datasciencesociety.net/datathon/kaufland-case-datathon-2019/
Team: Evgeni Dimov, Kalin Doichev, Kostadin Kostadinov and Aneta Tsvetkova
Data Science for Open Innovation in SMEs and Large Corporations (Data Science Society)
Latest trends in Data Science, and why open-source culture and open innovation are expanding so fast. Find out more about Data Science Society, its latest activities, and how it cooperates with different local communities around the world to stimulate new forms of education. At the end of the presentation are the results of two business cases, from a telecom company (SNA) and a German retailer (object detection), which were solved during the Data Science Society’s hackathons (Global Datathons).
Air Pollution in Sofia - Solution through Data Science by Kiwi team (Data Science Society)
Some of you already know how serious the problem of air pollution is in Sofia, the capital of Bulgaria, but ...
▶️Do you know how it could be solved?
Our community, represented by 1,800 members all around the world, tried to tackle the issue at our previous #GlobalDatathon and our international #DataScience #MonthlyChallenge, part of a university program.
Team Kiwi solves the problem by implementing algorithms and statistical methods to predict air pollution over the next 24 hours.
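A naive baseline for such a 24-hour-ahead forecast is seasonal persistence: predict each of the next 24 hours as the reading from the same hour one day earlier. This is an illustrative baseline with invented data, not team Kiwi's actual model:

```python
# Seasonal-persistence baseline for hourly air-quality forecasting:
# the next 24 hours are predicted to repeat the last observed 24 hours.

def forecast_next_24h(hourly_readings):
    if len(hourly_readings) < 24:
        raise ValueError("need at least 24 hourly readings")
    return list(hourly_readings[-24:])

history = [10 + (h % 24) for h in range(72)]  # 3 days of toy PM readings
prediction = forecast_next_24h(history)
# repeats the last daily cycle: [10, 11, ..., 33]
```

Any learned model for this task should at least beat such a persistence baseline to justify its complexity.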
#AcademiaDatathon Finalists' Solution of Crypto Datathon Case (Data Science Society)
Team UNWE, one of the finalists of #AcademiaDatathon, presents their solution to the #cryptocurrency data case. Explore how to perform data modeling with ARIMA and neural networks.
To learn more visit: https://bit.ly/2uhfF37
Video from the presentation: https://bit.ly/2LlaeYd
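The team's case above uses ARIMA and a neural network; as a minimal, hedged illustration of the autoregressive idea at the heart of ARIMA, here is an AR(1) coefficient estimated by least squares on an invented toy series (a stand-in, not the team's solution):

```python
# Least-squares estimate of phi in the zero-mean AR(1) model
# x[t] = phi * x[t-1] + eps -- the "AR" building block of ARIMA.

def fit_ar1(series):
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

# Noiseless toy series with x[t] = 0.8 * x[t-1]
series = [1.0]
for _ in range(20):
    series.append(0.8 * series[-1])

phi = fit_ar1(series)              # recovers 0.8 on this noiseless data
one_step_ahead = phi * series[-1]  # a one-step forecast
```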
Coreference Extraction from Identric’s Documents - Solution of Datathon 2018 (Data Science Society)
The whole NLP Data Science solution @ https://goo.gl/iEFb1L
Syntactic parsing, or dependency parsing, is the task of recognizing a sentence and assigning a syntactic structure to it. The most widely used syntactic structure is the parse tree, which can be generated using various parsing algorithms. These parse trees are useful in applications like grammar checking and, more importantly, play a critical role in the semantic analysis stage. For example, to answer the question “Who is the point guard for the LA Lakers in the next game?” we need to figure out its subject, objects and attributes, which helps us determine that the user wants the point guard of the LA Lakers specifically for the next game. This was mostly the identification and extraction NLP task for team Coala at the First Global Online Datathon.
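The subject extraction described above can be sketched over a dependency parse. The parse here is hand-built for a shortened version of the example question (in practice a parser such as spaCy or Stanza would produce it), and the indices and labels are invented for illustration:

```python
# A dependency parse as a flat token list:
# (index, word, head_index, dependency_label); head -1 marks the root.
parse = [
    (0, "Who",   1, "nsubj"),
    (1, "is",   -1, "root"),
    (2, "the",   4, "det"),
    (3, "point", 4, "compound"),
    (4, "guard", 1, "attr"),
]

def find_dependents(parse, head_index, label):
    """All words attached to `head_index` with the given dependency label."""
    return [word for i, word, head, dep in parse
            if head == head_index and dep == label]

root = next(i for i, _, head, _ in parse if head == -1)
subject = find_dependents(parse, root, "nsubj")  # ["Who"]
```

Walking the tree the same way for object and modifier labels yields the attributes needed for the semantic analysis stage.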
DNA Analytics - What Really Goes into Sausages - Datathon2018 Solution (Data Science Society)
Link to whole Data Science solution: https://goo.gl/nY3iuE
The task for the Telelink case of the First Global Datathon 2018 is to obtain the complete set of genome traces found in a single food sample and ALL organisms that should not be found in the food sample. The business needs a solution to this DNA sequence identification case for improved quality control, to be utilized in supply chain supervision and in health care and protection.
- by Polina Krustanova
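The identification task above can be sketched as matching sample reads to reference genomes by exact k-mer overlap. This is an illustrative simplification with invented toy "genomes", not the team's pipeline; real workflows use alignment tools and far longer sequences:

```python
# Match sequencing reads to reference organisms via shared k-mers.

def kmers(seq, k=4):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def organisms_in_sample(reads, references, k=4, min_shared=2):
    """Names of references sharing >= min_shared k-mers with the reads."""
    sample_kmers = set()
    for read in reads:
        sample_kmers |= kmers(read, k)
    return {name for name, genome in references.items()
            if len(kmers(genome, k) & sample_kmers) >= min_shared}

references = {            # invented toy 'genomes'
    "pig":     "ACGTACGTGG",
    "chicken": "TTTTCCCCAA",
}
reads = ["ACGTAC", "CGTGG"]
found = organisms_in_sample(reads, references)  # {"pig"}
```

Comparing the detected set against the product's declared ingredients then flags organisms that should not be present.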
Open Data reveals corruption practices - case from Datathon 2017 (Data Science Society)
A practical presentation of what team Chereshka did in two days, combining the Trade Register and Public Tenders data. Yasen tells us how linked data helped the team integrate and query the two sources. He also shows some interesting initial findings, including people who participate both in the tender request and in the management of the selected bidder.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
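One of the sampling strategies mentioned in the talk, uniform random sampling of a configuration space, can be sketched as rejection sampling over valid feature combinations. The feature names and the constraint below are invented for illustration:

```python
import random

# Uniform random sampling over boolean feature configurations,
# rejecting those that violate a cross-feature constraint.

FEATURES = ["optimize", "debug_symbols", "static_link", "lto"]

def valid(config):
    # Example constraint: link-time optimization requires optimization.
    return not (config["lto"] and not config["optimize"])

def sample_configs(n, rng):
    configs = []
    while len(configs) < n:
        config = {f: rng.random() < 0.5 for f in FEATURES}
        if valid(config):   # rejection keeps the valid set uniform
            configs.append(config)
    return configs

rng = random.Random(42)     # fixed seed for a reproducible sample
sample = sample_configs(10, rng)
```

Each sampled configuration can then be built and measured, feeding the cost-effective measurement and dimensionality-reduction steps described above.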
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx (MAGOTI ERNEST)
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for the detection of highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin trapping technique.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
DATA SCIENCE
Data science, also known as data-driven science, is an interdisciplinary field of scientific methods, processes, algorithms and systems to extract knowledge or insights from data in various forms, either structured or unstructured, similar to data mining.