Curiosity Bits Tutorial: Mining Twitter User Profile on Python V2
1. Created by The Curiosity Bits Blog (curiositybits.com)
Mining Twitter User Profiles in Python
Download the Python code used in the tutorial
Code provided by Dr. Gregory D. Saxton
2. Prerequisites
Setting up API keys: pg. 4-6
Installing necessary Python libraries: pg. 7-8
Creating a list of Twitter screen-names: pg. 9
Setting up a SQLite database to store Twitter data: pg. 10-14
3. We assume you are a Python newbie, so let's start with the very basics.
• Choosing the right Python platform: Python is a programming language, but you can use different software packages to write, edit and run Python code. We chose Anaconda, which is free to download; the Python version used here is 2.7.
• Once you install Anaconda, you can experiment with Python code in Spyder.
4. Setting up API keys
• We need keys to get Twitter data through the Twitter API (https://dev.twitter.com/). You need four values: API Key, API Secret, Access token, and Access token secret.
• First, go to https://dev.twitter.com/ and sign in with your Twitter account. Then go to the "my applications" page to create an application.
5. Setting up API keys
On the application form: for the application name and description, enter any text that makes sense to you. For the website URL, and likewise the callback URL, you can enter any legitimate URL; here, I put in the URL of my institution.
6. Setting up API keys
• After creating the app, go to the API Keys page, scroll down to the bottom and click Create my access token. Wait a few minutes and refresh the page; you will then see all the keys you need: API Key, API Secret, Access token, and Access token secret.
7. Installing necessary Python libraries
Think of Python libraries as the apps running on your operating system. To use our code, you need the following libraries:
• simplejson (https://pypi.python.org/pypi/simplejson)
• sqlite3 (http://sqlite.org/)
• SQLAlchemy (http://www.sqlalchemy.org/)
• Twython (https://twython.readthedocs.org/en/latest/index.html)
8. Installing necessary Python libraries
To install the libraries, go to the Start menu, type CMD, and run the command prompt as administrator. Once in CMD, type pip install followed by the name of the Python library. For example, to install Twython, type pip install twython and press Enter. Use this procedure to install all the necessary libraries.
9. Creating a list of Twitter screen-names
• Our Python code gathers profile information for multiple Twitter users, so first let's create a list of users. The list should be in .csv format and contain three columns (matching the configuration in our Python code). Specifically, it looks like this:
• The first column lists sequential numbers.
• The second column lists the Twitter screen-names you are interested in.
• For the third column, I entered 1 throughout, but you can leave it blank.
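The three-column list above can be generated with Python's standard csv module — a minimal sketch, where the screen-names are made-up placeholders you would replace with your own:

```python
import csv

# Hypothetical screen-names for illustration; substitute your own accounts.
screen_names = ["nytimes", "BBCWorld", "Reuters"]

with open("accounts.csv", "w") as f:
    writer = csv.writer(f)
    for i, name in enumerate(screen_names, start=1):
        # Three columns: sequential number, screen-name, user type (1 throughout)
        writer.writerow([i, name, 1])
```

The same file can also be made by hand in Excel and saved as .csv; the column order is what matters.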
10. Setting up a SQLite database to store Twitter data
You need storage for the incoming data from the Twitter API; that is what databases are for. We use SQLite, a lightweight relational database management system (RDBMS), which Python supports through the sqlite3 library you installed in the previous steps. On top of that, you can download a database browser to view and edit the database much like an Excel file.
Go to http://sqlitebrowser.sourceforge.net/ and download SQLite Database Browser. It allows you to view and edit SQLite databases.
11. Setting up a SQLite database to store Twitter data
Once you have the files downloaded, run the downloaded installer file.
12. Setting up a SQLite database to store Twitter data
Now we need to import the Twitter user list into a SQLite database. To do that, create a new database. Remember the database file name, because we will need to write it into the Python code.
The default file extension for SQLite is .sqlite; to prevent future complications, add the .sqlite extension when you save a file in SQLite Database Browser.
13. Setting up a SQLite database to store Twitter data
Stay in the database file you just created. Go to File - Import - Table From CSV File and import the .csv file you saved. Name the imported table accounts; this table name corresponds to the one we will use in the Python code. After you click Create, the csv list will be loaded into the database, and you can browse it under Browse Data. Lastly, remember to save the database.
14. Setting up a SQLite database to store Twitter data
Now we need to modify the imported table. Go to Edit - Modify Tables, then use Edit field to change the column names. To correspond to our Python code, name the first column rowed with Field Type Integer; the second column screen_name, Field Type String; and the third user_type, also String. In the end, the database table is defined as in the screenshot.
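The same table can also be created directly from Python with the standard sqlite3 module — a minimal sketch assuming the column names above; the file name accounts.sqlite and the inserted rows are examples, not part of the tutorial's code:

```python
import sqlite3

# Connect to the database file (created if it does not exist).
conn = sqlite3.connect("accounts.sqlite")
cur = conn.cursor()

# Columns mirror the ones defined in SQLite Database Browser above.
cur.execute("""
    CREATE TABLE IF NOT EXISTS accounts (
        rowed INTEGER,
        screen_name TEXT,
        user_type TEXT
    )
""")

# Hypothetical rows for illustration.
cur.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [(1, "nytimes", "1"), (2, "BBCWorld", "1")],
)
conn.commit()
conn.close()
```

Either route works; the Python code only cares that the table is named accounts and has these three columns.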
15. Now, moving on to the actual Python code...
Download the Python code and open it in Anaconda.
16. There are only a few places you need to change, but let's walk through the code first...
The first block of code imports the necessary Python libraries. Make sure you have installed all of these libraries.
17. The second block is where you enter the keys we obtained at the beginning. Just copy and paste each key inside the quotation marks: API Key, API Secret, Access token, Access token secret.
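This step amounts to four string assignments — a sketch with placeholder values (the variable names here are illustrative, not necessarily the ones in the tutorial's script); the Twython call is shown in a comment because it requires the twython package and valid credentials:

```python
# Placeholder credentials: paste your own keys inside the quotation marks.
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_TOKEN_SECRET = "your-access-token-secret"

# With the twython library installed, the authenticated client is then created as:
# from twython import Twython
# twitter = Twython(API_KEY, API_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
```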
18. The third block is where we define the columns in the SQLite database. For now, we do not need to edit anything here.
19. The fourth block is where we ask the Python code to get Twitter user profile information based on the list of users already saved in the SQLite database. Here you will see that the table name and the column names correspond to the ones we previously saved in SQLite.
20. The fifth block is where we make a specific request through the Twitter API to get the data. Here, we ask Python to get one recent status from each listed user. This procedure returns the user's profile information; we will discuss what profile information is available later on.
21. The raw output from the Twitter API is in JSON format. JSON is a standardized way of storing information. Now we need to map the information in the JSON to the tables in the database. Notice that each column in the database represents a Twitter output variable.
e.g. a Twitter user's profile description is stored as description under user in the JSON. One line of code maps the profile description in the JSON to the database column named from_user_description.
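The mapping step can be sketched with the standard json module — a simplified illustration using a made-up, heavily trimmed status payload, not the full structure Twitter returns or the tutorial's exact code:

```python
import json

# A trimmed, made-up example of one status as returned by the Twitter API.
raw = """{
    "text": "Hello world",
    "user": {
        "screen_name": "nytimes",
        "description": "News and analysis",
        "followers_count": 50000000
    }
}"""

status = json.loads(raw)

# Map nested JSON fields to the flat database column names used in the tutorial.
row = {
    "from_user_screen_name": status["user"]["screen_name"],
    "from_user_description": status["user"]["description"],
    "from_user_followers_count": status["user"]["followers_count"],
}
```

Each database column is filled by one such lookup into the parsed JSON.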
22. You need to change the file path and file name here
(RECOMMENDED).
If the Python file and your SQLite database are in the
same folder, just paste your database name here.
23. Now you are ready to run the code. Go to Run and choose Execute in a new dedicated Python interpreter. The first option, Execute in current Python or IPython interpreter, does not work on my end, but it may work on your computer.
24. Now, look at the right-side bar in Anaconda.
Oops, looks like I am getting error messages!
ERRORS!!
Don't panic! It's likely you will hit roadblocks when you run Python code, so it is important to learn to debug.
This error is likely because I saved the Python file in a folder that is not the default Python folder.
But what is the default Python folder?
25. The simple way to find your default Python folder:
• On a Windows machine, in the Start menu, right-click Computer and choose Properties.
27. In my case, C:\Anaconda\Lib\site-packages is my default Python folder. So I moved the Python code there, edited the file path in the code, and ran it. Here you go: the code is running and getting what we want! If you check the database file, you will see a new table named typhoon has been created (you can change the table name in the Python code), and it includes the listed users' recent tweets and profile information.
28. Oops! Error again!
The Twitter API has a rate limit. With the version of the Twitter API used in our Python code, you can get roughly 300 users per 15 minutes. Once you hit the limit, you will see the error message shown in the screenshot.
There are two ways to deal with the restriction:
1. wait 15 minutes for another run;
2. create multiple Twitter apps and get multiple sets of keys. Once you use up the quota in one run, paste in a new key to start a new run!
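The wait-and-retry option can be sketched in plain Python — a simplified illustration with a stand-in fetch function, not the tutorial's actual code; the 15-minute window is Twitter's rate-limit interval:

```python
import time

RATE_LIMIT_WINDOW = 15 * 60  # seconds: Twitter resets its rate limit every 15 minutes

def fetch_profile(screen_name):
    """Stand-in for the real API call; raises to simulate hitting the rate limit."""
    if fetch_profile.calls >= 2:  # pretend the quota is 2 requests per window
        fetch_profile.calls = 0
        raise RuntimeError("rate limit exceeded")
    fetch_profile.calls += 1
    return {"screen_name": screen_name}

fetch_profile.calls = 0

def fetch_all(screen_names, wait=RATE_LIMIT_WINDOW):
    profiles = []
    for name in screen_names:
        while True:
            try:
                profiles.append(fetch_profile(name))
                break
            except RuntimeError:
                time.sleep(wait)  # wait out the window, then retry the same user
    return profiles

# wait=0 is used here only so the sketch finishes instantly.
results = fetch_all(["nytimes", "BBCWorld", "Reuters"], wait=0)
```

In a real run you would keep the 15-minute wait, or swap in fresh keys as the slide suggests.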
29. If you put 0 here, the code starts with the user listed in the first row. Because we will hit the rate limit, you will need to run the code multiple times to finish crawling all the users on the list. Make sure to change the starting row number!
For example, if in the first run you get user (0) through user (150) and hit the rate limit, you should put 151 in the second run to start with the next user on the list.
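Resuming from a given row amounts to simple list slicing — a sketch with placeholder names, assuming the list is zero-indexed as in the slide:

```python
# Hypothetical list of screen-names loaded from the accounts table.
screen_names = ["user%d" % i for i in range(300)]

start_row = 151  # the previous run stopped after user (150)

# Process only the users from start_row onward.
remaining = screen_names[start_row:]
```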
30. A list of Twitter output variables
Go to SQLite Database Browser and select the table typhoon (again, this is the name we gave in the Python code). You will see the output variables across the columns.
31. A list of Twitter output variables
Some key variables related to the user profile:
• from_user_screen_name: the user's Twitter screen-name
• from_user_followers_count: how many people follow the user
• from_user_friends_count: how many people the user follows
• from_user_listed_count: how many times the user appears in other users' public lists
• from_user_favourites_count: how many tweets the user has liked (favorited)
• from_user_statuses_count: how many tweets the user has sent
• from_user_description: the user's profile bio
• from_user_location: the user's location
• from_user_created_at: when the account was created
32. A list of Twitter output variables
Go to File - Export - Table as CSV to export the data to .csv format. Make sure to add the .csv file extension to the file name.
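The export can also be done from Python with the standard sqlite3 and csv modules — a minimal sketch; the in-memory database, table contents, and output file name here are examples, and in practice you would connect to your own .sqlite file:

```python
import csv
import sqlite3

# Build a tiny example database in memory; in practice, connect to your .sqlite file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE typhoon (from_user_screen_name TEXT, from_user_followers_count INTEGER)"
)
conn.execute("INSERT INTO typhoon VALUES ('nytimes', 50000000)")

cur = conn.execute("SELECT * FROM typhoon")
with open("typhoon_export.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row: column names
    writer.writerows(cur)  # one CSV row per database row
conn.close()
```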
33. Please send your questions and comments to
weiaixu [at] buffalo dot edu