
Uploaded by: sumit-suman

Posted on 16-Apr-2017


README

SUMMARY:

We have built two services:

1) The first service pulls tweets matching a specific keyword from Twitter using Twitter's REST Search API. The tweets are then analyzed with the sentiment analyzer, and the results of the analysis are stored in a database.

2) The second service provides a REST interface that lets users query the analyzed data; it returns aggregated values for each sentiment. A Twitter application was created at apps.twitter.com to obtain the Consumer Key, Consumer Secret, Access Token, and Access Token Secret. A fixed number of tweets was extracted and stored in an SQLite3 database. The tweets were collected from Twitter using the application's Consumer Key and Consumer Secret (API Secret).
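The fetch step described above can be sketched with Tweepy (one of the libraries discussed in the references below). This is an illustrative sketch, not the project's actual code: the function name and parameter names are placeholders, and the four credential values must come from an application registered at apps.twitter.com.

```python
# Hypothetical sketch of the keyword-based fetch step using Tweepy.
# All names here are illustrative, not the project's actual identifiers.

def fetch_tweets(keyword, count, consumer_key, consumer_secret,
                 access_token, access_token_secret):
    """Return up to `count` recent tweets matching `keyword`."""
    import tweepy  # imported lazily so the sketch loads without the library

    # OAuth setup with the four credentials from apps.twitter.com.
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    # Twitter's REST Search API returns recent tweets matching the query.
    return api.search(q=keyword, count=count)
```

Each returned tweet object carries fields such as its creation time and text, which map onto the table columns described later in this README.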

REFERENCES:

To acknowledge the concepts and logic used in this project, please see the following resources:

1) http://stackoverflow.com/questions/24214189/how-can-i-get-tweets-older-than-a-week-using-tweepy-or-other-python-libraries

2) http://stackoverflow.com/questions/15628535/how-can-i-retrieve-all-tweets-and-attributes-for-a-given-user-using-python

3) http://stackoverflow.com/questions/31164610/connect-to-sqlite3-server-using-pyodbc-python

4) https://dev.twitter.com/rest/public/search

5) http://stackoverflow.com/questions/14209868/how-to-work-with-sqlite3-and-python

6) http://textblob.readthedocs.org/en/dev/quickstart.html

7) https://impythonist.wordpress.com/2015/07/12/build-an-api-under-30-lines-of-code-with-python-and-flask/


FURTHER INFORMATION ABOUT THE PROJECT:

The Mini Project is a Twitter data analysis project: it extracts Twitter data for a keyword the user specifies, stores that data, and analyzes it.

The project is completed by Sumit Suman and Manish Pujapanda.

The technologies used in this project are Python for the application code and SQLite3 for storing the tweets.


DETAILED DESCRIPTIONS OF DATA-FILES:

Here are brief descriptions of the data files.

1) Project_Twitter.py

Tweets are fetched for a particular keyword, and a database table is created to store them. Sentiment analysis is then run on the retrieved tweets.

Information added into the table:

1. Serial Number

2. Created at (the time at which the tweet was created)

3. Description of the tweet

4. Sentiment

5. Polarity

Then the table is printed.
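The storage step can be sketched with Python's built-in sqlite3 module, assuming the column layout listed above. The table and column names here are illustrative, not necessarily those used in Project_Twitter.py, and the sketch uses an in-memory database where the project uses a file-backed one.

```python
import sqlite3

def create_table(conn):
    # Columns mirror the fields listed above: serial number, creation
    # time, tweet text, sentiment label, and polarity score.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tweets (
            serial_no   INTEGER PRIMARY KEY AUTOINCREMENT,
            created_at  TEXT,
            description TEXT,
            sentiment   TEXT,
            polarity    REAL
        )
    """)

def insert_tweet(conn, created_at, description, sentiment, polarity):
    # serial_no is omitted so SQLite assigns it automatically.
    conn.execute(
        "INSERT INTO tweets (created_at, description, sentiment, polarity) "
        "VALUES (?, ?, ?, ?)",
        (created_at, description, sentiment, polarity),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # the project stores tweets in a file
create_table(conn)
insert_tweet(conn, "2017-04-16 10:00:00", "sample tweet text", "positive", 0.8)
for row in conn.execute("SELECT * FROM tweets"):
    print(row)  # each row is one stored, analyzed tweet
```

Printing the table, as the script does, is then a simple SELECT over all rows.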

2) Sentiment_Analyser.py

The tweets are analyzed for positive, negative and neutral tweets.
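With TextBlob (reference 6 above), each tweet receives a polarity score in [-1.0, 1.0], and the three-way classification can be done by sign. The threshold choice below is an assumption for illustration, not necessarily what Sentiment_Analyser.py uses.

```python
def classify_sentiment(polarity):
    """Map a polarity score in [-1.0, 1.0] to a sentiment label."""
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

# With TextBlob, the polarity score would come from:
#   from textblob import TextBlob
#   polarity = TextBlob(tweet_text).sentiment.polarity
print(classify_sentiment(0.5))   # positive
print(classify_sentiment(-0.2))  # negative
print(classify_sentiment(0.0))   # neutral
```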

3) Application.py

The URL structure is registered with web.py, and a handler class is created that serves the query endpoint and opens the SQLite3 database connection. A query then fetches all rows from the table and returns the row count. Finally, queries count the positive and negative sentiments, which are printed along with the computed aggregate.
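The counting and aggregation step can be sketched with plain sqlite3 (the web.py routing is omitted here). The table layout, the "share of positive among scored tweets" aggregate, and all names are illustrative assumptions, not the project's exact queries.

```python
import sqlite3

def sentiment_summary(conn):
    """Count rows per sentiment and return the counts plus an aggregate."""
    total = conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
    positive = conn.execute(
        "SELECT COUNT(*) FROM tweets WHERE sentiment = 'positive'"
    ).fetchone()[0]
    negative = conn.execute(
        "SELECT COUNT(*) FROM tweets WHERE sentiment = 'negative'"
    ).fetchone()[0]
    # One possible aggregate: share of positive tweets among non-neutral ones.
    scored = positive + negative
    aggregate = positive / scored if scored else None
    return {"total": total, "positive": positive,
            "negative": negative, "aggregate": aggregate}

# Demo on an in-memory database with a few hand-made rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (description TEXT, sentiment TEXT)")
conn.executemany("INSERT INTO tweets VALUES (?, ?)",
                 [("a", "positive"), ("b", "positive"),
                  ("c", "negative"), ("d", "neutral")])
print(sentiment_summary(conn))
```

In the actual service, the handler class would run these queries inside its GET method and return the summary to the caller.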