Pavan Kapanipathi's talk at IBM's Frontiers of Cloud Computing and Big Data Workshop 2014. http://researcher.ibm.com/researcher/view_group_subpage.php?id=5565
Due to the increased adoption of the social web, users, and specifically Twitter users, are facing information overload. Unless a user is willing to restrict their sources (e.g., the number of accounts they follow), important information relevant to their interests often goes unnoticed. The reasons include: (1) postings may appear at a time when the user is not looking; (2) the user may be unaware of, and hence not following, the information source; and (3) information may arrive at a rate the user cannot consume. Furthermore, temporally relevant information discovered late may be of no use.
My research addresses these challenges by:
(1) Generating user profiles of interests from Twitter using Wikipedia. The interests gleaned from users' Twitter data can be leveraged by personalization and recommendation systems to reduce information overload (volume) for users.
(2) Filtering Twitter data relevant to dynamically evolving entities. Together with volume, this addresses the velocity challenge of delivering relevant information in real time. The approach is deployed on Twitris to crawl for dynamic event-relevant tweets for analysis. The prominent aspect of both approaches is the use of crowd-sourced knowledge bases such as Wikipedia.
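The first item, building interest profiles by linking tweet text to Wikipedia categories, can be sketched roughly as follows. This is a minimal illustration, not the talk's actual method: the hard-coded entity-to-category map is an assumption standing in for a real entity linker and the Wikipedia category graph.

```python
from collections import Counter

# Hypothetical entity -> Wikipedia-category map; in practice this would come
# from an entity linker plus the Wikipedia category hierarchy.
ENTITY_CATEGORIES = {
    "FC Barcelona": ["Football", "Sports"],
    "TensorFlow": ["Machine learning", "Software"],
}

def interest_profile(tweets):
    """Score Wikipedia categories by how often their entities appear in tweets."""
    scores = Counter()
    for tweet in tweets:
        for entity, categories in ENTITY_CATEGORIES.items():
            if entity.lower() in tweet.lower():
                for category in categories:
                    scores[category] += 1
    return scores.most_common()

profile = interest_profile([
    "Great win for FC Barcelona tonight!",
    "Training a model in TensorFlow today.",
])
```

The resulting ranked category list is the kind of interest profile a recommender could consume to prioritize incoming posts.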
Taking it Public: Visualizing Geospatial Data on the Web Using Shiny (nacis_slides)
NACIS 2016 Presentation
Jerry Shannon, University of Georgia
Kyle Walker, Texas Christian University
Julia Connell, University of Georgia
Governmental and non-profit institutions have increasingly created data dashboards based on open datasets to increase transparency and encourage citizen participation. Two limitations have hampered these efforts. First, raw datasets are often complex and difficult to decipher for non-specialists. Second, software to visualize trends within the data is expensive. For several of these systems, tools specifically for geovisualization are underdeveloped. In this presentation, we describe how Shiny, a data visualization system developed by RStudio, provides solutions to both issues. Shiny harnesses a variety of existing tools such as Leaflet, Plotly, and Highcharts, and encourages users to interact and explore datasets. As it runs on the free and open source R software, Shiny's cost is also minimal. We use two case studies to describe how Shiny provides an accessible way to facilitate data exploration for public audiences.
Noshir Contractor's view on the future of Linked Data (Carlos Pedrinaci)
This document discusses adding social network data and analysis to linked open data. It suggests that while linked data is growing, killer applications that use linked data may require integrating social network information as well. It presents an initial test bed for integrating semantic web data that includes social network analysis and text mining to analyze variables across data sources and make recommendations of workflows, datasets, documents, and people. The integration of social data and analysis with linked open data could provide larger, higher resolution longitudinal data and enable reasoning and inferences over the combined data sources.
CSIA 360 PAPER #1 CAN WE ENSURE THAT OPEN DATA IS USEFUL AND SECURE? (UMUC) (ViscolKanady)
The document provides a scenario for a cybersecurity consulting firm tasked with writing a white paper for a federal agency on ensuring open data is both useful and secure. The agency currently distributes valuable datasets through paid DVDs but plans to provide open data through a government website, raising security concerns from customers. The white paper should address: 1) An introduction to open data policies and benefits; 2) Examples of how open data is used; 3) Security issues regarding confidentiality, integrity, availability, authenticity and non-repudiation and current mitigations; and 4) Best practices for ensuring security of open data. Research is provided on relevant policies, uses of open data, security issues and potential solutions, and best practices to
Privacy impact assessments (PIAs) are required by the E-Government Act of 2002 to evaluate privacy risks in federal IT systems and are submitted annually to the Office of Management and Budget. The document discusses researching how PIAs are used and the value they provide. A two to three page white paper must be written analyzing whether PIAs are useful policy tools by providing privacy protections and informing stakeholders like advocates, lawmakers, and policymakers. Research on privacy laws, PIA contents and use, and best practices for IT privacy protections must be included.
CSIA 360 CASE STUDY #3 IS THERE A CYBERSECURITY WORKFORCE CRISIS IN STATE GOV... (ViscolKanady)
There is a cybersecurity workforce crisis in many state governments due to several factors. States face difficulties hiring qualified cybersecurity workers because of competition from higher-paying private sector jobs and budget constraints that limit state salary offers. Additionally, states have trouble retaining cybersecurity staff due to work environments with outdated technology and few opportunities for career growth. However, others argue there is no unique crisis for states and all employers face a global shortage of cybersecurity skills. To alleviate shortages, states should offer competitive salaries, implement cybersecurity training programs, and use alternative recruitment strategies that emphasize quality of life over compensation.
CSIA 360 CASE STUDY #1 ARE PRIVACY IMPACT ASSESSMENTS (PIA) USEFUL AS A POLIC... (ViscolKanady)
Privacy impact assessments (PIA) are required by the E-Government Act of 2002 for federal agencies to submit to the Office of Management and Budget annually. The assessments evaluate IT systems and how personally identifiable information is collected, stored, shared and managed. The document discusses researching how PIAs are used and whether they provide useful information to privacy advocates, lawmakers and others involved in policymaking. A two to three page white paper will be written analyzing PIAs and their usefulness as a policy tool based on research into laws, best practices and perspectives of different groups.
This document summarizes a study examining how alternative news sources on Twitter may have influenced the social media agenda around the Morgan Harrington case. The study analyzed over 1,000 tweets about the case from the 48 hours after her disappearance was reported. It found support for the hypotheses that tweets about the case were more likely to link to alternative news sources rather than mainstream media, retweet messages from alternative sources more, and retweet messages containing links to alternative content over mainstream content. This suggests alternative sources may have played a role in setting the social media agenda regarding the case.
UW Digital Communications: Social Media Is Not Search (Marianne Sweeny)
I had the pleasure of speaking to one of the Digital Communication classes at the University of Washington on my favorite topic, why social media will never replace search as an information finding medium. Those students were wicked smart and I walked away learning a lot myself.
This is a presentation I did for Enterprise Search Summit West 2008, amended for a Web Project Management class at the University of Washington.
Adding Value to Your Custom Research With Online Tools (guest6e3d4d)
Secondary research provides existing data that can help understand a situation and answer questions without conducting primary research. It is cheaper and faster than collecting new data, and the data is from larger sample sizes. Secondary research helps frame the marketing problem and guide the development of a primary research study by providing context on consumer and industry trends.
Data Science: 2018 Media & Influencer Analysis (Zeno Group)
The document analyzes media coverage and influencer activity related to data science over 2018. It finds that major publications like Forbes and TechCrunch drove the most volume of coverage, while topics like data analytics, data science, and predictive analytics resonated most. Key influencers are identified along with their social reach, areas of expertise, and trending topics discussed over time. Their top consumed media provides insights into where to engage them.
STC 2010 Strategies for the Social Web for Documentation (Anne Gentle)
The social web can be perceived as intimidating, life-saving, risky, or a black hole of productivity loss. Learn how to take a strategic approach to integrating social media to accomplish your overall documentation goals.
DISCOVERING USERS TOPIC OF INTEREST FROM TWEET (ijcsit)
Nowadays, social media hosts some of the largest gatherings of people online, and industries have many ways to promote their products to the public through advertising. The variety of advertisements is increasing dramatically, and because advertising has reliably brought success in the market, it is practiced by major industries. Companies therefore try hard to draw the attention of customers on social networks through online advertising. One of the most popular social media platforms is Twitter, known for sharing short texts called ‘Tweets'. People create profiles there with basic information. To ensure advertisements are shown to relevant people, Twitter targets users based on language, gender, interest, followers, device, behaviour, tailored audiences, keywords, and geography, generating interest sets from users' activity on Twitter. Our framework determines the topic of interest, if any, from a given list of tweets. This process is called Entity Intersect Categorizing Value (EICV). Each category topic generates a set of words or phrases related to that topic. An entity set is created by processing tweets with keyword generation and Twitter data obtained via the Twitter API. The value of the entities is matched against the set of categories; if it crosses a threshold value, the result is the category matching the desired interest. For smaller data sizes, the results show that our framework performs with a higher accuracy rate.
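The matching step this abstract describes (an entity set intersected with per-category word sets, gated by a threshold) can be sketched as below. The category keyword sets, the naive tokenizer, and the threshold of 2 are illustrative assumptions, not the paper's actual EICV parameters.

```python
# Illustrative per-category keyword sets; the paper derives these per topic.
CATEGORY_KEYWORDS = {
    "sports": {"match", "goal", "league", "coach", "tournament"},
    "technology": {"software", "startup", "gadget", "app", "cloud"},
}

def extract_entities(tweets):
    """Naive keyword generation: lowercase word tokens from all tweets."""
    entities = set()
    for tweet in tweets:
        entities.update(word.strip(".,!?#@").lower() for word in tweet.split())
    return entities

def categorize(tweets, threshold=2):
    """Return categories whose keyword overlap with the tweets meets the threshold."""
    entities = extract_entities(tweets)
    matches = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        overlap = entities & keywords
        if len(overlap) >= threshold:
            matches[category] = sorted(overlap)
    return matches

result = categorize([
    "Great goal in last night's match!",
    "The league table is tight, the coach must be nervous.",
])
```

With these two tweets the sports overlap is four keywords, so only "sports" crosses the threshold; the threshold is what keeps an incidental single-word overlap from producing a spurious interest category.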
The Attention Economy is a concept that unifies the different approaches to entering the user's mind: search engines, spreading through the social graphs of social networks, and recommendations.
Domain Scoping for Subject Matter Experts by Elham Khabiri (diannepatricia)
Presentation for Cognitive Systems Institute Group Speaker Series call on October 15, 2015. Elham Khabiri is a Researcher at the IBM TJ Watson Research Center.
When to use the different text analytics tools - MeaningCloud (MeaningCloud)
Classification, topic extraction, clustering... When to use the different Text Analytics tools?
How to leverage Text Analytics technology for your business
MeaningCloud webinar, February 8th, 2017
More information and recording of the webinar https://www.meaningcloud.com/blog/recorded-webinar-use-different-text-analytics-tools
www.meaningcloud.com
This presentation introduces DiscoverText, a text analysis software. It discusses how text analytics is a growing trend for mining and interpreting unstructured data. The presentation outlines a 5-step process for getting started with text analysis and highlights features of DiscoverText such as importing data, tagging text, creating document clusters, and generating reports. DiscoverText allows users to securely analyze data from multiple sources and explore crowdsourcing annotations to enhance analysis.
This document discusses metrics and analytics for social media. It defines metrics as counting and tracking past data, while analytics look at both past and present data strategically. Examples are provided of metrics like employee attrition rates versus analytics that examine the reasons behind those rates. Key frameworks for social media measurement are outlined, like return on investment, reach, likes, and cost of ignoring opportunities. Popular social media analytics tools from Google, Facebook, and Twitter are profiled. The document also discusses network analytics and using tools like NodeXL to map relationships and influence within social networks.
This document discusses metrics and analytics for social media. It defines metrics as counting and tracking past data, while analytics look at both past and present data strategically. Examples are provided of metrics like employee attrition rates versus analytics that examine the reasons behind those rates. Key frameworks for social media measurement are outlined, including types of engagement data from platforms like Facebook, Twitter, and Google Analytics. The document also covers principles of social network analysis and tools for mapping connections between users.
This document discusses metrics and analytics for social media. It defines metrics as counting and tracking past data, while analytics look at both past and present data strategically. Examples are provided of metrics like employee attrition rates versus analytics that examine the reasons behind those rates. Key frameworks for social media measurement are outlined, including types of engagement data from platforms like Facebook, Twitter, and Google Analytics. The document also covers principles of social network analysis for mapping influence through connections on platforms.
The presentation was delivered on November 13, 2009 by Marlena Reed and Sharon Goldmacher of the Atlanta-based marketing firm communications 21 to the National Credit Reporting Association.
The document discusses the evolution of search from early human-mediated models to modern automated systems. It outlines the development of search engines from basic directories to sophisticated algorithms that analyze user behavior and semantic relationships to improve relevance (Search 1.0 to Search 2.0). The document then discusses opportunities for search engine optimization in the modern search landscape, including harnessing social media and mobile usage data.
Uw Digital Communications Social Media Is Not SearchMarianne Sweeny
I had the pleasure of speaking to one of the Digital Communication classes at the University of Washington on my favorite topic, why social media will never replace search as an information finding medium. Those students were wicked smart and I walked away learning a lot myself.
This is a presentation that I did for the Enterprise Search Summit West 2008 that has been amended for a Web Project Management class at the University of Washington
Adding Value to Your Custom Research With Online Toolsguest6e3d4d
Secondary research provides existing data that can help understand a situation and answer questions without conducting primary research. It is cheaper and faster than collecting new data, and the data is from larger sample sizes. Secondary research helps frame the marketing problem and guide the development of a primary research study by providing context on consumer and industry trends.
Data Science: 2018 Media & Influencer AnalysisZeno Group
The document analyzes media coverage and influencer activity related to data science over 2018. It finds that major publications like Forbes and TechCrunch drove the most volume of coverage, while topics like data analytics, data science, and predictive analytics resonated most. Key influencers are identified along with their social reach, areas of expertise, and trending topics discussed over time. Their top consumed media provides insights into where to engage them.
STC 2010 Strategies for the Social Web for DocumentationAnne Gentle
The social web can be perceived as intimidating, live-saving, risky, or a black hole of productivity loss. Learn how to take a strategic approach to integrating social media to accomplish your overall documentation goals.
Nowadays social media has become one of the largest gatherings of people in online. There are many ways for the industries to promote their products to the public through advertising. The variety of advertisement is increasing dramatically. Businessmen are so much dependent on the advertisement that significantly it really brought out success in the market and hence practiced by major industries. Thus, companies are trying hard to draw the attention of customers on social networks through online advertisement. One of the most popular social media is Twitter which is popular for short text sharing named ‘Tweet'. People here create their profile with basic information. To ensure the advertisements are shown to relative people, Twitter targets people based on language, gender, interest, follower, device, behaviour, tailored audiences, keyword, and geography targeting. Twitter generates interest sets based on their activities on Twitter. What our framework does is that it determines the topic of interest from a given list of Tweets if it has any. This process is called Entity Intersect Categorizing Value (EICV). Each category topic generates a set of words or phrases related to that topic. An entity set is created from processing tweets by keyword generation and Twitters data using Twitter API. Value of entities is matched with the set of categories. If they cross a threshold value, it results in the category which matched the desired interest category. For smaller amounts of data sizes, the results show that our framework performs with higher accuracy rate.
Nowadays social media has become one of the largest gatherings of people in online. There are many ways for the industries to promote their products to the public through advertising. The variety of advertisement is increasing dramatically. Businessmen are so much dependent on the advertisement that significantly it really brought out success in the market and hence practiced by major industries. Thus, companies are trying hard to draw the attention of customers on social networks through online advertisement. One of the most popular social media is Twitter which is popular for short text sharing named ‘Tweet'. People here create their profile with basic information. To ensure the advertisements are shown to relative people, Twitter targets people based on language, gender, interest, follower, device, behaviour, tailored audiences, keyword, and geography targeting. Twitter generates interest sets based on their activities on Twitter. What our framework does is that it determines the topic of interest from a given list of Tweets if it has any. This process is called Entity Intersect Categorizing Value (EICV). Each category topic generates a set of words or phrases related to that topic. An entity set is created from processing tweets by keyword generation and Twitters data using Twitter API. Value of entities is matched with the set of categories. If they cross a threshold value, it results in the category which matched the desired interest category. For smaller amounts of data sizes, the results show that our framework performs with higher accuracy rate.
DISCOVERING USERS TOPIC OF INTEREST FROM TWEETijcsit
Nowadays social media has become one of the largest gatherings of people in online. There are many ways
for the industries to promote their products to the public through advertising. The variety of advertisement is increasing dramatically. Businessmen are so much dependent on the advertisement that significantly it really brought out success in the market and hence practiced by major industries. Thus, companies are trying hard to draw the attention of customers on social networks through online advertisement. One of the
most popular social media is Twitter which is popular for short text sharing named ‘Tweet'. People here create their profile with basic information. To ensure the advertisements are shown to relative people, Twitter targets people based on language, gender, interest, follower, device, behaviour, tailored audiences,
keyword, and geography targeting. Twitter generates interest sets based on their activities on Twitter. What
our framework does is that it determines the topic of interest from a given list of Tweets if it has any. This
process is called Entity Intersect Categorizing Value (EICV). Each category topic generates a set of words
or phrases related to that topic. An entity set is created from processing tweets by keyword generation and
Twitters data using Twitter API. Value of entities is matched with the set of categories. If they cross a
threshold value, it results in the category which matched the desired interest category. For smaller amounts
of data sizes, the results show that our framework performs with higher accuracy rate.
The Attention Economy is a concept unifying the different approaches to enter the user's mind, from search engines to spreading through the social graph of social networks to recommendations.
Domain Scoping for Subject Matter Experts by Elham Khabiridiannepatricia
Presentation for Cognitive Systems Institute Group Speaker Series call on October 15, 2015. Elham Khabiri is a Researcher at the IBM TJ Watson Research Center.
When to use the different text analytics tools - Meaning CloudMeaningCloud
Classification, topic extraction, clustering... When to use the different Text Analytics tools?
How to leverage Text Analytics technology for your business
MeaningCloud webinar, February 8th, 2017
More information and recording of the webinar https://www.meaningcloud.com/blog/recorded-webinar-use-different-text-analytics-tools
www.meaningcloud.com
This presentation introduces DiscoverText, a text analysis software. It discusses how text analytics is a growing trend for mining and interpreting unstructured data. The presentation outlines a 5-step process for getting started with text analysis and highlights features of DiscoverText such as importing data, tagging text, creating document clusters, and generating reports. DiscoverText allows users to securely analyze data from multiple sources and explore crowdsourcing annotations to enhance analysis.
This document discusses metrics and analytics for social media. It defines metrics as counting and tracking past data, while analytics look at both past and present data strategically. Examples are provided of metrics like employee attrition rates versus analytics that examine the reasons behind those rates. Key frameworks for social media measurement are outlined, like return on investment, reach, likes, and cost of ignoring opportunities. Popular social media analytics tools from Google, Facebook, and Twitter are profiled. The document also discusses network analytics and using tools like NodeXL to map relationships and influence within social networks.
The presentation was delivered November 13, 2009 by Marlena Reed and Sharon Goldmacher of Atlanta based marketing firm communications 21 to the National Credit Reporting Association.
The document discusses the evolution of search from early human-mediated models to modern automated systems. It outlines the development of search engines from basic directories to sophisticated algorithms that analyze user behavior and semantic relationships to improve relevance (Search 1.0 to Search 2.0). The document then discusses opportunities for search engine optimization in the modern search landscape, including harnessing social media and mobile usage data.
Thunderbird School Of Global Management Presentation2
1. Social Media : The New Kid on the Block Sheila Donnelly October 27, 2010
2. Social Media? Andreas Kaplan and Michael Haenlein define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content." From: “Users of the world, unite! The challenges and opportunities of Social Media.” Business Horizons 53 (1): 59–68.
4. Social Media Technologies: Microblogging Publishing Photo Sharing Aggregation Audio Video Live-Casting RSS Mobile Crowd sourcing Virtual Worlds Gaming Search Conversation Apps Social Networking
6. Blogs and Google Trends. Let’s take two of those technologies and learn how to use them in your business research: search engines for blogs, and Google™ Trends.
8. Blogs: Technorati, an internet search engine for searching blogs. Features include Blog and Post Search (find blogs or posts on specific topics), Tag Feeds (sign up for RSS feeds from the latest blogosphere posts on your topic), and State of the Blogosphere (an annual series chronicling the blogosphere since 2004).
9. Google™ Trends See how often topics have been searched on Google™ over time. See how frequently topics have appeared on Google™ News. See in what geographic region most people were searching for your topic.
10. Google™ Trends Compare up to 5 terms, separated by a comma. Use vertical bar [|] to see searches that contained either term. Use parentheses around multi-word terms. Use quotes to search terms in the specific order terms were entered. Data can be exported to a .csv file.
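The query syntax rules on the slide above can be captured in a small helper that assembles a comparison string. This is a sketch of the slide's rules only, not an official Google Trends API; the function name and its `exact_order` flag are assumptions for illustration.

```python
def trends_query(*terms, exact_order=False):
    """Assemble a Google Trends comparison query from up to 5 terms,
    per the slide's rules: terms are comma-separated; a multi-word
    term gets parentheses, or quotes (exact_order=True) to match the
    words in the specific order they were entered."""
    if len(terms) > 5:
        raise ValueError("Google Trends compares at most 5 terms")
    wrap = '"{}"' if exact_order else "({})"
    parts = [wrap.format(t) if " " in t else t for t in terms]
    return ", ".join(parts)

print(trends_query("AT&T", "Sprint", "Verizon"))    # AT&T, Sprint, Verizon
print(trends_query("web analytics"))                # (web analytics)
print(trends_query("web analytics", exact_order=True))  # "web analytics"
```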
11. Example search: AT&T, Sprint, Verizon in the United States for the year 2008. The number next to each search term corresponds to its total average traffic in the chosen time frame. When comparing multiple search terms on a relative scale, the first term entered is always 1.0; subsequent terms are ranked and scaled against it. Google Trends also lets the user compare the volume of searches between two or more terms, as well as the traffic to one or more websites.
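The relative scaling described above, where the first term is pinned at 1.0 and later terms are scaled against it, can be illustrated with made-up numbers. The traffic figures below are invented for the example, not real Trends data.

```python
def relative_scale(volumes):
    """Scale average search volumes against the first term, which is
    pinned at 1.0, mirroring how Trends reports comparative traffic."""
    base = volumes[0]
    return [round(v / base, 2) for v in volumes]

# Invented average-traffic figures for three terms entered in order.
print(relative_scale([120, 90, 150]))  # [1.0, 0.75, 1.25]
```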
Professor Smith is teaching a class on marketing in the 21st century and is including social media. To assist the students, IBIC has put together a presentation on social media. Objectives: provide a general definition of social media; list social media technologies; select two technologies and show how to use them effectively.
What is social media? As hard to define as Web 2.0, but for the sake of the presentation, this definition seems to be as popular as any of them.
Nice graphic created by Mirna Bard. Comprehensive but not all-inclusive, and it includes the brand names of many technologies we see all the time.
Examples of social media technologies. Not all are suitable for doing research, but this shows the wide range out there.
Overwhelmed? Surprised if you are not. But let’s examine two sources that can aid the business researcher.
May be diverging a bit from the traditional sources used in library instruction. Using this to be interesting and provocative.
The rest of the presentation will highlight some sources for doing your marketing/business research on social media, using social media. Say you wanted to search blog posts on “web analytics” or “facebook marketing”? See http://www.searchenginejournal.com/blog-search-engines-the-complete-overview/7856/ for an overview of blog search engines. Key performance indicators (KPIs).
Need to register your blog, etc. If we have time, we can go to technorati and check out the features.
Besides using search technology to search blogs for your topic, you can also use Google Trends.
Google Trends analyzes a portion of Google web searches to compute how many searches have been done for the terms you enter, relative to the total number of searches done on Google over time. It then displays a graph with the results: the Search Volume Index graph.