Overview of the current state of the art of semantic technology and future trends
Linked Open Data + Context-aware Services = Killer Apps of Semantic Technology
xAPI-enabled mobile health system with context-awareness & recommendation eng... — Jessie Chuang
1. xAPI is a very effective tool for enabling apps to serve humanity quickly, because it connects heterogeneous data immediately.
2. xAPI is about people working together; xAPI projects are truly cross-domain collaborations.
3. xAPI is about connecting current technologies instead of reinventing the wheel (the power of APIs).
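The "connecting heterogeneous data" point rests on xAPI's uniform statement format: every event, from any system, is an actor–verb–object JSON document. A minimal sketch in Python (the email address and activity ID are illustrative placeholders, not from the slides; the verb IRI is a standard ADL verb):

```python
import json

def make_statement(actor_email, verb_id, verb_name, activity_id, activity_name):
    """Build a minimal xAPI statement: actor, verb, object."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Example: a health app records that a patient completed a walking goal.
stmt = make_statement(
    "patient@example.com",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "https://example.com/activities/daily-walk", "Daily walk goal",
)
print(json.dumps(stmt, indent=2))
```

Because every source emits the same statement shape, a learning record store can aggregate events from a fitness tracker, a learning platform, and a clinic app without per-source schemas.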
App Day 2014 - App drivers: The changing shape of advertising within the app... — Fjord
Daniel Freeman, Service Design Director at Fjord Stockholm, presented at App Day 2014 in Copenhagen in January on how the shape of advertising is changing within the app world.
F. Petroni, L. Querzoni, R. Beraldi, M. Paolucci:
"LCBM: Statistics-Based Parallel Collaborative Filtering."
In: Proceedings of the 17th International Conference on Business Information Systems (BIS), 2014.
Abstract: "In the last ten years, recommendation systems evolved from novelties to powerful business tools, deeply changing the internet industry. Collaborative Filtering (CF) represents today a widely adopted strategy to build recommendation engines. The most advanced CF techniques (i.e. those based on matrix factorization) provide high quality results, but may incur prohibitive computational costs when applied to very large data sets. In this paper we present Linear Classifier of Beta distributions Means (LCBM), a novel collaborative filtering algorithm for binary ratings that is (i) inherently parallelizable and (ii) provides results whose quality is on par with state-of-the-art solutions (iii) at a fraction of the computational cost."
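The abstract names the ingredients of LCBM — a per-item Beta distribution over binary ratings and a linear classifier on the distribution means — without implementation detail. The toy sketch below is one plausible reading, not the paper's exact algorithm: it fits a Beta(1+pos, 1+neg) posterior per item and places a per-user linear cut between the means of that user's liked and disliked items.

```python
from collections import defaultdict

def beta_means(ratings):
    """ratings: list of (user, item, r) with r in {0, 1}.
    Returns item -> mean of a Beta(1+pos, 1+neg) posterior."""
    pos, neg = defaultdict(int), defaultdict(int)
    for _, item, r in ratings:
        (pos if r else neg)[item] += 1
    items = set(pos) | set(neg)
    return {i: (1 + pos[i]) / (2 + pos[i] + neg[i]) for i in items}

def predict(user, item, ratings, means):
    """Predict 1 if the item's posterior mean clears the user's
    linear threshold (midpoint between disliked and liked means)."""
    liked = [means[i] for u, i, r in ratings if u == user and r == 1 and i in means]
    disliked = [means[i] for u, i, r in ratings if u == user and r == 0 and i in means]
    # One linear cut separating the user's liked from disliked items.
    threshold = (max(disliked, default=0.0) + min(liked, default=1.0)) / 2
    return 1 if means.get(item, 0.5) >= threshold else 0

ratings = [("u1", "a", 1), ("u2", "a", 1), ("u1", "b", 0), ("u2", "b", 0), ("u2", "c", 1)]
means = beta_means(ratings)
print(predict("u1", "c", ratings, means))  # -> 1
```

Note why this parallelizes well: the Beta counts are a single pass over the ratings that can be sharded by item, and each user's threshold depends only on that user's own ratings.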
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility — SciAstra
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... — Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... — Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides a means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt well to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying higher-order cognitive mechanisms, due to their ecological nature and their capacity to elicit complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and the solution of frictionless reproducibility, calling on the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
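As a concrete (toy) illustration of strategically exploring a variability space, the sketch below uniformly samples boolean configuration options and "measures" a synthetic performance property that includes an interaction effect between options. The option names and the cost model are invented for illustration and are not taken from the talk:

```python
import random

# Toy configuration space: four boolean options, i.e. 2**4 = 16 configurations.
OPTIONS = ["o3", "lto", "debug", "vectorize"]

def measure(config):
    """Stand-in for building and benchmarking one configuration:
    returns a synthetic 'execution time' in arbitrary units."""
    t = 100.0
    if config["o3"]:
        t -= 30
    if config["lto"]:
        t -= 10
    if config["o3"] and config["vectorize"]:
        t -= 15  # interaction effect: vectorize only pays off with o3
    if config["debug"]:
        t += 25
    return t

def uniform_sample(n, seed=0):
    """Draw n configurations uniformly at random (seeded for replicability)."""
    rng = random.Random(seed)
    return [{o: rng.random() < 0.5 for o in OPTIONS} for _ in range(n)]

samples = uniform_sample(8)
results = [(c, measure(c)) for c in samples]
best = min(results, key=lambda cr: cr[1])
print(best)
```

Even this toy case shows the talk's point: the measured property depends on option *interactions*, so per-option reasoning misses the optimum, and sampling (here uniform random) is one cost-effective way to probe the space when exhaustive measurement is too expensive.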
Invited talk at the Journées Nationales du GDR GPL 2024.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
They monitor common gases, weather parameters, and particulates.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... — Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr^−1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
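One way to see why a decline in luminosity function normalization translates directly into the quoted decline in UV luminosity density: if the LF is modeled as a Schechter function, the luminosity density scales linearly with the normalization φ*. The sketch below integrates an illustrative Schechter function numerically; the parameter values are invented for the demonstration, not the paper's fitted values:

```python
import math

def schechter(M, phi_star, M_star, alpha):
    """Schechter UV luminosity function in magnitudes (per mag per Mpc^3)."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)

def luminosity_density(phi_star, M_star, alpha, M_lo=-24.0, M_hi=-17.0, n=2000):
    """Integrate L(M) * phi(M) over magnitude with the trapezoid rule.
    L is in arbitrary units proportional to 10**(-0.4*M)."""
    dM = (M_hi - M_lo) / n
    total = 0.0
    for i in range(n + 1):
        M = M_lo + i * dM
        w = 0.5 if i in (0, n) else 1.0
        total += w * schechter(M, phi_star, M_star, alpha) * 10 ** (-0.4 * M)
    return total * dM

# Reducing phi_star by a factor of 2.5 reduces rho_UV by exactly that factor,
# since phi_star is a pure normalization of the integrand.
rho1 = luminosity_density(1e-4, -19.0, -2.0)
rho2 = luminosity_density(0.4e-4, -19.0, -2.0)
print(rho1 / rho2)
```

In practice the paper's shape parameters (M*, α) can also evolve with redshift, in which case the density ratio is no longer a pure normalization ratio; the sketch isolates the normalization effect only.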
ESR spectroscopy in liquid food and beverages.pptx — PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about it. ESR spectroscopy is a sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. Assessment of the antioxidant capability of liquid food and beverages is mainly performed by the spin-trapping technique.
Phenomics-assisted breeding in crop improvement — IshaGoswami9
The global population is increasing and will reach about 9 billion by 2050, and due to climate change it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progress of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can be linked to genomic information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at multiple levels, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx — RASHMI M G
This presentation covers abnormal, or anomalous, secondary growth in plants. It defines secondary growth as an increase in plant girth due to the vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
This presentation gives a brief overview of the structural and functional attributes of nucleotides, the structure and function of genetic material, and the impact of UV rays and pH upon them.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... — University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
2024 State of Marketing Report – by HubSpot — Marius Sescu
https://www.hubspot.com/state-of-marketing
· Scaling relationships and proving ROI
· Social media is the place for search, sales, and service
· Authentic influencer partnerships fuel brand growth
· The strongest connections happen via call, click, chat, and camera.
· Time saved with AI leads to more creative work
· Seeking: A single source of truth
· TL;DR: Get on social, try AI, and align your systems.
· More human marketing, powered by robots
ChatGPT has been a revolutionary addition to the world since its introduction in 2022. This chatbot caused a big shift in the sector of information gathering and processing. What is the story of ChatGPT? How does the bot respond to prompts and generate content? Swipe through these slides, prepared by Expeed Software, a web development company, on the development and technical intricacies of ChatGPT!
Product Design Trends in 2024 | Teenage Engineerings — Pixeldarts
The realm of product design is a constantly changing environment where technology and style intersect. Every year introduces fresh challenges and exciting trends that mold the future of this captivating art form. In this piece, we delve into the significant trends set to influence the look and functionality of product design in the year 2024.
How Race, Age and Gender Shape Attitudes Towards Mental Health — ThinkNow
Mental health has been in the news quite a bit lately. Dozens of U.S. states are currently suing Meta for contributing to the youth mental health crisis by inserting addictive features into their products, while the U.S. Surgeon General is touring the nation to bring awareness to the growing epidemic of loneliness and isolation. The country has endured periods of low national morale, such as in the 1970s when high inflation and the energy crisis worsened public sentiment following the Vietnam War. The current mood, however, feels different. Gallup recently reported that national mental health is at an all-time low, with few bright spots to lift spirits.
To better understand how Americans are feeling and their attitudes towards mental health in general, ThinkNow conducted a nationally representative quantitative survey of 1,500 respondents and found some interesting differences among ethnic, age and gender groups.
Technology
For example, 52% agree that technology and social media have a negative impact on mental health, but when broken out by race, 61% of Whites felt technology had a negative effect, and only 48% of Hispanics thought it did.
While technology has helped us keep in touch with friends and family in faraway places, it appears to have degraded our ability to connect in person. Staying connected online is a double-edged sword since the same news feed that brings us pictures of the grandkids and fluffy kittens also feeds us news about the wars in Israel and Ukraine, the dysfunction in Washington, the latest mass shooting and the climate crisis.
Hispanics may have a built-in defense against the isolation technology breeds, owing to their large, multigenerational households, strong social support systems, and tendency to use social media to stay connected with relatives abroad.
Age and Gender
When asked to rate their mental health, men rate it higher than women by 11 percentage points, and Baby Boomers rank it highest, with 83% saying it’s good or excellent vs. 57% of Gen Z saying the same.
Gen Z spends the most amount of time on social media, so the notion that social media negatively affects mental health appears to be correlated. Unfortunately, Gen Z is also the generation that’s least comfortable discussing mental health concerns with healthcare professionals. Only 40% of them state they’re comfortable discussing their issues with a professional compared to 60% of Millennials and 65% of Boomers.
Race Affects Attitudes
As seen in previous research conducted by ThinkNow, Asian Americans lag other groups when it comes to awareness of mental health issues. Twenty-four percent of Asian Americans believe that having a mental health issue is a sign of weakness compared to the 16% average for all groups. Asians are also considerably less likely to be aware of mental health services in their communities (42% vs. 55%) and most likely to seek out information on social media (51% vs. 35%).
AI Trends in Creative Operations 2024 by Artwork Flow.pdf — marketingartwork
This article covers the AI trends expected to emerge in the field of creative operations in 2024. Marketers and brand builders should be aware of these trends to put them to use and save themselves some time!
A report by thenetworkone and Kurio.
The contributing experts and agencies are (in alphabetical order): Sylwia Rytel, Social Media Supervisor, 180heartbeats + JUNG v MATT (PL), Sharlene Jenner, Vice President - Director of Engagement Strategy, Abelson Taylor (USA), Alex Casanovas, Digital Director, Atrevia (ES), Dora Beilin, Senior Social Strategist, Barrett Hoffher (USA), Min Seo, Campaign Director, Brand New Agency (KR), Deshé M. Gully, Associate Strategist, Day One Agency (USA), Francesca Trevisan, Strategist, Different (IT), Trevor Crossman, CX and Digital Transformation Director; Olivia Hussey, Strategic Planner; Simi Srinarula, Social Media Manager, The Hallway (AUS), James Hebbert, Managing Director, Hylink (CN / UK), Mundy Álvarez, Planning Director; Pedro Rojas, Social Media Manager; Pancho González, CCO, Inbrax (CH), Oana Oprea, Head of Digital Planning, Jam Session Agency (RO), Amy Bottrill, Social Account Director, Launch (UK), Gaby Arriaga, Founder, Leonardo1452 (MX), Shantesh S Row, Creative Director, Liwa (UAE), Rajesh Mehta, Chief Strategy Officer; Dhruv Gaur, Digital Planning Lead; Leonie Mergulhao, Account Supervisor - Social Media & PR, Medulla (IN), Aurelija Plioplytė, Head of Digital & Social, Not Perfect (LI), Daiana Khaidargaliyeva, Account Manager, Osaka Labs (UK / USA), Stefanie Söhnchen, Vice President Digital, PIABO Communications (DE), Elisabeth Winiartati, Managing Consultant, Head of Global Integrated Communications; Lydia Aprina, Account Manager, Integrated Marketing and Communications; Nita Prabowo, Account Manager, Integrated Marketing and Communications; Okhi, Web Developer, PNTR Group (ID), Kei Obusan, Insights Director; Daffi Ranandi, Insights Manager, Radarr (SG), Gautam Reghunath, Co-founder & CEO, Talented (IN), Donagh Humphreys, Head of Social and Digital Innovation, THINKHOUSE (IRE), Sarah Yim, Strategy Director, Zulu Alpha Kilo (CA).
Trends In Paid Search: Navigating The Digital Landscape In 2024Search Engine Journal
The search marketing landscape is evolving rapidly with new technologies, and professionals, like you, rely on innovative paid search strategies to meet changing demands.
It’s important that you’re ready to implement new strategies in 2024.
Check this out and learn the top trends in paid search advertising that are expected to gain traction, so you can drive higher ROI more efficiently in 2024.
You’ll learn:
- The latest trends in AI and automation, and what this means for an evolving paid search ecosystem.
- New developments in privacy and data regulation.
- Emerging ad formats that are expected to make an impact next year.
Watch Sreekant Lanka from iQuanti and Irina Klein from OneMain Financial as they dive into the future of paid search and explore the trends, strategies, and technologies that will shape the search marketing landscape.
If you’re looking to assess your paid search strategy and design an industry-aligned plan for 2024, then this webinar is for you.
5 Public speaking tips from TED - Visualized summarySpeakerHub
From their humble beginnings in 1984, TED has grown into the world’s most powerful amplifier for speakers and thought-leaders to share their ideas. They have over 2,400 filmed talks (not including the 30,000+ TEDx videos) freely available online, and have hosted over 17,500 events around the world.
With over one billion views in a year, it’s no wonder that so many speakers are looking to TED for ideas on how to share their message more effectively.
The article “5 Public-Speaking Tips TED Gives Its Speakers”, by Carmine Gallo for Forbes, gives speakers five practical ways to connect with their audience, and effectively share their ideas on stage.
Whether you are gearing up to get on a TED stage yourself, or just want to master the skills that so many of their speakers possess, these tips and quotes from Chris Anderson, the TED Talks Curator, will encourage you to make the most impactful impression on your audience.
See the full article and more summaries like this on SpeakerHub here: https://speakerhub.com/blog/5-presentation-tips-ted-gives-its-speakers
See the original article on Forbes here:
http://www.forbes.com/forbes/welcome/?toURL=http://www.forbes.com/sites/carminegallo/2016/05/06/5-public-speaking-tips-ted-gives-its-speakers/&refURL=&referrer=#5c07a8221d9b
ChatGPT and the Future of Work - Clark Boyd Clark Boyd
Everyone is in agreement that ChatGPT (and other generative AI tools) will shape the future of work. Yet there is little consensus on exactly how, when, and to what extent this technology will change our world.
Businesses that extract maximum value from ChatGPT will use it as a collaborative tool for everything from brainstorming to technical maintenance.
For individuals, now is the time to pinpoint the skills the future professional will need to thrive in the AI age.
Check out this presentation to understand what ChatGPT is, how it will shape the future of work, and how you can prepare to take advantage.
A brief introduction to DataScience with explaining of the concepts, algorithms, machine learning, supervised and unsupervised learning, clustering, statistics, data preprocessing, real-world applications etc.
It's part of a Data Science Corner Campaign where I will be discussing the fundamentals of DataScience, AIML, Statistics etc.
Time Management & Productivity - Best PracticesVit Horky
Here's my presentation on by proven best practices how to manage your work time effectively and how to improve your productivity. It includes practical tips and how to use tools such as Slack, Google Apps, Hubspot, Google Calendar, Gmail and others.
The six step guide to practical project managementMindGenius
The six step guide to practical project management
If you think managing projects is too difficult, think again.
We’ve stripped back project management processes to the
basics – to make it quicker and easier, without sacrificing
the vital ingredients for success.
“If you’re looking for some real-world guidance, then The Six Step Guide to Practical Project Management will help.”
Dr Andrew Makar, Tactical Project Management
2. Current urban journey planners
UMAP’15 V. Codina, J. Mena, L. Oliva 2
Google Maps, Moovit, Citymapper
3. Transport mode selection
Only based on explicit transport preferences:
• Less transfers
• Less walking
• Shorter routes
But... limited personalization:
Lack of preference learning from user's feedback
4. Research goal
To personalize suggested journey plans based on:
1) Past user ratings
2) Current context
How: with context-aware recommendation strategies
5. Role of our journey plan recommender
Recommender engine decoupled from planner
Task: context-aware user satisfaction scoring
(Diagram: journey plan + target context + target user preferences → predicted score, shown in the UI)
6. Journey plan recommendation domain
Journey plans are highly dynamic: unique routes per user's request
Challenge: new item (early rater) problem
How to recommend items with no ratings at all?
Collaborative Filtering not an option!
Solution: Content-based Filtering
New item problem solved by exploiting item’s content
7. Content-based approach to journey plan recommendation
Idea: “show me more of the same kind of plans I liked”
Score predictions based on feature vector matching:
Score = p · q, with user preference vector p ∈ Rⁿ (e.g. 0.5 -0.7 0.2 0.8 -0.8) matched against journey plan vector q ∈ [0,1]ⁿ (e.g. 0.2 1 0.5 0 1), where n = # attribute values
User profile derived via Stochastic Gradient Descent by solving the regularized least squares problem over the training ratings
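The scoring scheme above can be sketched in a few lines. The snippet below is an illustrative Python sketch, not the authors' code: it predicts a score as the dot product of a user preference vector and a journey plan feature vector, and fits the user vector by SGD on the L2-regularized least-squares objective. All names, toy data and hyperparameters are hypothetical.

```python
import random

def predict_score(user_vec, item_vec):
    """Content-based prediction: dot product of the user preference
    vector and the journey plan feature vector."""
    return sum(p * q for p, q in zip(user_vec, item_vec))

def fit_user_vector(ratings, n, epochs=200, lr=0.05, reg=0.01):
    """Learn one user's preference vector from (item_vec, rating) pairs
    via SGD on the regularized least-squares objective."""
    p = [0.0] * n
    for _ in range(epochs):
        random.shuffle(ratings)
        for item_vec, r in ratings:
            err = r - predict_score(p, item_vec)
            for i in range(n):
                p[i] += lr * (err * item_vec[i] - reg * p[i])
    return p

random.seed(0)
# Toy data: 5 attribute values, ratings normalized to [-1, 1]
training = [
    ([0.2, 1.0, 0.5, 0.0, 1.0], -1.0),  # a disliked plan
    ([1.0, 0.0, 0.0, 0.8, 0.0],  1.0),  # a liked plan
]
p = fit_user_vector(list(training), n=5)
print(round(predict_score(p, [1.0, 0.0, 0.0, 0.8, 0.0]), 2))  # close to 1
```

With only two training ratings the fitted vector essentially interpolates them, slightly shrunk by the regularization term.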
8. Journey plan original data
Originally we had these attributes per leg:
• Transport mode (e.g. Walk, Tram, Bus, Taxi, Car, Bike, …)
• Time (seconds)
• Cost (€)
• Physical effort (kJ)
Example:
Journey plan | Leg 1 (Car) | Leg 2 (Tram) | Leg 3 (Walk)
Time         | 600 s       | 900 s        | 300 s
Cost         | 3 €         | 1 €          | 0 €
Effort       | 0 kJ        | 0 kJ         | 50 kJ
9. Journey plan feature vector representation
Numeric attributes discretized into 5 equal intervals
E.g. Walk_Effort: [very low | low | medium | high | very high]
Discretization by soft cuts using a fuzzy-set approach
Interval values defined by mobility experts
Example (boundaries at 150 kJ, 225 kJ, 275 kJ, 425 kJ): X = 350 kJ → low = 0, medium = 0.5, high = 0.5
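The soft-cut idea can be illustrated with a small sketch. This is a hypothetical reconstruction, not the authors' implementation: each crisp cut point is widened into a band over which membership shifts linearly between the two adjacent intervals. The cut points and band width below are illustrative only; in the paper the interval values were set by mobility experts.

```python
def soft_cut_membership(x, cuts, width):
    """Fuzzy discretization by soft cuts: each crisp cut is widened into
    a band of `width` over which membership moves linearly from one
    interval to the next. Returns len(cuts) + 1 weights summing to 1.
    Assumes cuts are sorted and bands do not overlap."""
    weights = [0.0] * (len(cuts) + 1)
    half = width / 2.0
    for i, c in enumerate(cuts):
        if x < c - half:              # fully below this soft band
            weights[i] = 1.0
            return weights
        if x <= c + half:             # inside the band: linear split
            frac = (x - (c - half)) / width
            weights[i] = 1.0 - frac
            weights[i + 1] = frac
            return weights
    weights[-1] = 1.0                 # above every cut
    return weights

# Illustrative cuts for Walk_Effort (kJ); 350 kJ sits mid-band between
# "medium" and "high", reproducing the slide's 0.5 / 0.5 example.
print(soft_cut_membership(350, cuts=[150, 225, 350, 500], width=50))
```

Values near a boundary thus get their weight split between two intervals instead of being forced into one, which is the point of the fuzzy-set approach.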
10. Context matters for journey plan scoring
IDEA: user’s preferences can depend on context
11. Classification of context-aware strategies
Context as additional dimension for prediction
3 main paradigms depending on how context is used: pre-filtering, post-filtering, contextual modeling
(Diagram: in-context training ratings + target context + target item → prediction model → predicted score)
12. The pre-filtering strategy:
Distributional Semantic Pre-Filtering (DSPF)
Reduction-based approach that builds local user vectors
Using the ratings identified as relevant for the target context
Similarities based on the distributional semantics of contextual conditions
(Diagram: in-context training ratings + target context → ratings filtering → local ratings → local user modeling → local user vectors)
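The filtering step can be sketched as follows. This is a simplified illustration with hypothetical names, not the authors' code: it assumes pairwise condition similarities have already been derived from the rating distributions, and keeps only the ratings whose condition is similar enough to the target context.

```python
def relevant_ratings(ratings, target, cond_sim, threshold=0.5):
    """DSPF-style pre-filtering (simplified): keep ratings whose contextual
    condition matches the target or is distributionally similar to it.
    `ratings`: (item_vec, rating, condition) triples.
    `cond_sim`: maps an unordered condition pair to a similarity in [0, 1]."""
    local = []
    for item_vec, r, cond in ratings:
        if cond == target or cond_sim.get(frozenset((cond, target)), 0.0) >= threshold:
            local.append((item_vec, r))
    return local

# Hypothetical similarities: "family" influences ratings like "sunny" does
sims = {frozenset(("sunny", "family")): 0.8,
        frozenset(("sunny", "rainy")): 0.1}
ratings = [([1, 0],  1.0, "sunny"),
           ([1, 1],  0.5, "family"),
           ([0, 1], -1.0, "rainy")]
local = relevant_ratings(ratings, "sunny", sims)
print(len(local))  # the rainy rating is filtered out
```

A local user vector would then be fitted on `local` only, giving one context-specific model per target situation.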
13. The contextual modeling strategy:
Distributional Semantic Contextual Modeling (DSCM)
Method inspired by time-aware Matrix Factorization [1]
Content-based linear model extended with:
• Global contextual biases
  e.g. “users tend to rate lower routes with long walks when hot”
• Preference-specific contextual biases
  e.g. “John rates higher bike routes when sunny”
Distributional similarities between conditions
[1] Y. Koren, “Collaborative filtering with temporal dynamics,” Commun. ACM, 2010.
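A minimal sketch of the extended prediction (an assumed simplified form, not the paper's exact model): the linear content-based score is augmented with per-attribute global and user-specific contextual bias terms, matching the two kinds of biases listed above.

```python
def dscm_score(user_vec, item_vec, condition, global_bias, user_bias):
    """DSCM-style prediction (simplified sketch): linear content-based
    score plus per-attribute contextual biases. Both bias tables map a
    contextual condition to one weight per item attribute; `global_bias`
    is shared by all users, `user_bias` belongs to the target user."""
    n = len(item_vec)
    base = sum(p * q for p, q in zip(user_vec, item_vec))
    g = global_bias.get(condition, [0.0] * n)
    s = user_bias.get(condition, [0.0] * n)
    return base + sum((gi + si) * qi for gi, si, qi in zip(g, s, item_vec))

# Hypothetical attributes: 0 = "long walk", 1 = "bike leg"
user_vec = [0.2, 0.4]
global_bias = {"hot": [-0.5, 0.0]}    # long walks rated lower when hot
user_bias = {"sunny": [0.0, 0.6]}     # this user likes bike routes when sunny
print(round(dscm_score(user_vec, [1.0, 0.0], "hot", global_bias, user_bias), 2))
print(round(dscm_score(user_vec, [0.0, 1.0], "sunny", global_bias, user_bias), 2))
```

In the real model these bias parameters are learned from the contextually-tagged ratings, with the distributional similarities between conditions used to share statistical strength across similar contexts.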
14. Experimental Evaluation
Question: which context-aware strategy is better for journey plan recommendation?
Pre-filtering (DSPF) vs. Contextual modeling (DSCM)
Two types of evaluation:
• Experiment 1: traditional offline evaluation
• Experiment 2: user-centric online evaluation
15. Offline evaluation: in-context journey plan rating acquisition
(Screenshot: journey plan requests and suggested journey plans, with a context settings panel covering 2 user factors and 8 urban factors; “?” = unknown value)
16. Offline evaluation: procedure
Contextually-tagged journey plan rating dataset in the city of Barcelona (Spain)
Per-user data splitting
Error-based performance metric: RMSE
Users              68
Journey plans      1,628
Attribute values   69
Contextual factors 10
Conditions         38
Ratings            3,256
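The evaluation protocol can be sketched briefly. The helper names below are hypothetical; the sketch shows the all-but-n per-user split described in the notes and the RMSE metric named on the slide.

```python
import math

def rmse(predictions, actuals):
    """Root-mean-square error over paired predicted/actual ratings."""
    se = sum((p - a) ** 2 for p, a in zip(predictions, actuals))
    return math.sqrt(se / len(predictions))

def per_user_split(ratings_by_user, n_test=2):
    """All-but-n protocol: hold out the last n ratings of every user for
    testing and train on the rest."""
    train, test = {}, {}
    for user, ratings in ratings_by_user.items():
        train[user], test[user] = ratings[:-n_test], ratings[-n_test:]
    return train, test

train, test = per_user_split({"u1": [1.0, 0.5, 1.0, 0.0]})
print(train["u1"], test["u1"])
print(round(rmse([1.0, 0.0], [0.0, 0.0]), 4))  # sqrt(1/2) ≈ 0.7071
```

Splitting per user rather than globally guarantees that every user contributes both training and test ratings, which matters in a dataset of only 68 users.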
17. Offline evaluation: results
Both context-aware strategies are better than baseline (Free)
But… differences between context-aware models are not significant (p-value = 0.25)
18. User-centric online evaluation
Participants used an Android app under real contexts
Developed by several partners of the SUPERHUB project
(Flow: journey plan request + user context → top-4 recommendation → rating, used for model fine-tuning)
19. User-centric online evaluation: setup
A/B testing method and user study questions
Participants: citizens of Barcelona (Spain)
4 aspects evaluated:
• Top-n accuracy
• Ranking accuracy
• Context-awareness
• Overall satisfaction
22. Context-Aware User Modeling Strategies for Journey Plan Recommendation
QUESTIONS
Victor Codina
vcodina@bdigital.org
Editor's Notes
I'm going to present our paper ...
This is a joint work with Luis Oliva from Technical University of Catalonia and Jose Mena from Barcelona Digital tech center. My name is ...
Nowadays there are several free mobile apps that assist you during urban journey planning, helping you to find the shortest routes between two arbitrary points in a city by combining different transport modes, private and public ones.
Here I'm showing some popular examples that most of you probably know: Google Maps, Moovit, CityMapper.
A common strong point of these apps is that they provide users with real-time information about transit conditions and transport services.
However, a common limitation of these systems is their lack of user preference learning: they do not exploit previous user interactions with the system to learn about users' preferences and adapt journey plan suggestions to them.
Commonly these apps allow users to personalize the generated routes by explicitly specifying a set of generic transport preferences for the current request.
These are, for example, the kind of options that one can choose using Moovit: I prefer routes with less walking. I want shorter routes. Or I want routes with fewer transport mode interchanges.
This limitation of current journey planners is what motivated this work, whose research goal was to develop a journey planner able to generate journey plan recommendations based on user feedback in the form of ratings, as shown in this screenshot, and adapted to the current context.
To accomplish this goal we decided to employ SoA context-aware user modeling and recommendation strategies, given their proven effectiveness in other domains, such as Point-of-Interest recommendation or the movie and music domains.
Differently from other domains in our application the recommender engine is not the central component of the system. Instead, the recommender works as an independent module that is used by the journey planner to rank the set of generated candidate routes.
So, in our system the recommendation task was formulated as a journey plan scoring problem, or more generally as a rating prediction problem: given a candidate journey plan and a target user and context, the recommender task consisted of predicting the score given by the user to that plan under such contextual conditions.
A particularity of our recommendation domain was that journey plans, the items to recommend, were highly dynamic, in the sense that they were generated based on arbitrary start and destination points, which means they were mostly unique per request.
This implied that our recommender should be able to make predictions in new-item conditions, where the target item to recommend has no associated ratings from other users.
So for this reason CF methods are not feasible in this particular task since they require ratings of different users on the same items to extract meaningful patterns from the data.
In such conditions only CB or knowledge-based strategies are feasible, because they are able to exploit item metadata in order to build user models and make recommendations.
So, in this work we used a content-based method as the basis of the context-aware strategies.
The main assumption of CB approaches is that users tend to like items with attributes similar to those of items they already liked in the past.
More formally, these approaches compute the suitability of an item for a user based on feature matching calculation.
To do such comparison it is necessary to encode user preferences and item descriptions in the same attribute space, commonly using a feature vector representation.
In the case of the item vector, positive values represent the set of attribute values that are relevant in the description of the item, and zero values represent the absence of the attribute.
In the case of user vectors, values represent the user's interests, which can be positive, negative or neutral.
Once we have this feature representation of the target item and user, predictions can be generated by comparing the values of the two vectors. In this work we used the dot product, since it is one of the most popular methods.
User vectors can be derived from the past user ratings by using several methods. Here we calculate them by solving the least squares problem using SGD.
So one can see that in CB recommendation the item vector construction is a very important step. Now I'm going to explain how we built the vectors from the original journey plan data.
To do so, we extracted several categorical attributes from the numeric data provided by the journey planner about each candidate journey plan, which included the following information:
- the time required to complete each leg composing the plan (in seconds)
- the cost associated to each transport mode used in the plan (in €)
- the physical effort required (if the plan includes legs by foot or bike)
From these data we derived several discrete attributes describing the time and cost associated to each mode of transport separately, which we grouped by walk, bike, public and private, and also global features like the total number of public interchanges. Each of these attributes was discretized into 5 equal intervals, from very low to very high values. For the value assignment we used a fuzzy-set method in order to accurately classify values close to the boundaries. The result of this process is a weighted feature vector with values in [0,1] indicating the weight of each attribute in the representation of the journey plan.
So far I have talked about how we generate context-free predictions; now I'm going to present the strategies we experimented with to incorporate context into the prediction.
The main assumption of CARS is that user preferences can depend on context. For example…
CARS are commonly classified in three main paradigms depending on how they incorporate context: pre-filtering, post-filtering and contextual modeling. Pre-filtering methods use context only for selecting the rating data that is relevant to the target context, so they can be used in combination with any context-free recommendation method. This is also the case of post-filtering methods, but here context is exploited at the end of the process, to adjust context-free recommendations rather than to filter data. Finally, contextual modeling strategies are those approaches where context is incorporated into the estimation function as additional model parameters, giving rise to truly multi-dimensional prediction models.
Although there are several context-aware recommendation approaches out there, in our evaluation we only experimented with two state-of-the-art methods, one based on pre-filtering and another one based on contextual modeling. We selected them because of their superior rating prediction accuracy and scalability in domains with high context granularity (where contextual situations are defined by the conjunction of several contextual conditions), as in this case.
Now I will explain the main idea of each of the implemented methods:
The pre-filtering method consists of an adaptation of DSPF, a sophisticated reduction-based approach that builds a local prediction model for each target contextual situation based on the ratings identified as relevant for the given context. It is called distributional semantic because this method defines two contexts as similar if their composing conditions have similar distributional semantics, in the sense of how they influence the user ratings.
This example illustrates how this strategy works:
1. Let's assume that our set of ratings is composed of ratings acquired in three different contexts: ratings provided while travelling with family, in sunny conditions, and in rainy conditions. Assuming that the target context is sunny, and that the family and sunny conditions have similar rating-influencing patterns, the set of relevant ratings for that target context would be the ratings tagged with family and sunny.
2. Then, a local recommendation model is learned using the relevant subset of ratings. In particular, using our CB approach, a local user vector of preferences is learnt for each user, which will be used a posteriori to do the item-user matching in the given target context.
This approach consists of extending a linear CB prediction model by introducing global and user-specific parameters that model the influence of context on the user's preferences:
- Global parameters, that model the global effects of contextual conditions w.r.t. journey plan features: they capture the global context variability of user preferences with respect to the item attributes. For example, one of these parameters could represent that in hot conditions users tend to rate positively routes with short walking distances.
- User-specific parameters, that model the per-user context variability of preferences.
Our goal was to understand which strategy is better in this particular recommendation task, so our evaluation consisted of a performance comparison between these two strategies: the pre-filtering and the contextual modeling strategy.
We did this comparison by means of two experiments: one using the traditional offline setting, and one user-centric experiment based on an A/B testing method.
Once requested, the planner generated a set of alternative journey plans using different transport mode combinations.
By clicking on each of the alternatives, users could see the plan details and rate the route under the active context.
In order to evaluate the strategies from a system-centric perspective we first needed a way to collect a set of in-context ratings for a variety of journey plans and contextual situations
To do so, we developed an ad-hoc web-based app in which users could request journey plans under imaginary contexts and rate them.
68 users participated in this experiment, which lasted one week, and we collected more than 3,000 ratings. As you can see, the number of ratings collected doubles the number of rated journey plans; this is because users were asked to rate the same plan in two different contextual situations. Next, I will explain this acquisition method in more detail.
We split the collected data set into training and test sets by using an all-but-n protocol, in which a portion of each user's ratings is held out for testing,
and we measure the performance of the models in terms of their RMSE.
After one week, we had collected around 3,000 ratings, and each user provided on average 50 ratings.
We split this data set into training and test sets in order to train the evaluated prediction models and measure their RMSE.
This bar chart shows the results: DSPF corresponds to the pre-filtering strategy, DSCM to the contextual modeling strategy, and Free refers to the context-free method using a linear CB model.
As you can see, both context-aware strategies clearly outperform the context-free baseline (the contextual modeling reduces RMSE by 14% and the pre-filtering by 9%).
Comparing the context-aware strategies, we can observe that the contextual modeling seems to have better performance, although we cannot state that these differences are statistically significant. So this offline evaluation was not useful to identify a clear winner.
For this experiment users used the mobile app developed in the SUPERHUB project. Here you can see some screenshots of the UI through which users could receive recommendations and rate them.
As I said previously, we also compared the performance of the proposed context-aware strategies using A/B testing. For this user-centric experiment we used a different set of users, but of similar size; particularly, 67 users completed the experiment.
This diagram shows the design of our experiment, in which users were randomly split into three groups of similar size: one group receiving recommendations from the pre-filtering strategy, another one using the contextual modeling, and the other one using the context-free algorithm. Obviously, users didn't know which group they were assigned to.
Once they completed the task, users answered 4 different questions on a five-point scale. Each question was aimed at evaluating a different aspect of the recommendations: top-n accuracy, ranking accuracy, context-awareness, and overall satisfaction with the system.
These bar charts show the mean score given by each group of users to the 4 questions. One can easily observe that the contextual modeling strategy, DSCM, is the one with the highest perceived recommendation accuracy and context-awareness, which leads to higher user satisfaction according to the correlation analysis. In this case we found that the differences with respect to the pre-filtering strategy were statistically significant, thus validating its superior performance in this application.