Instagram is among the top five most visited websites in the world, even if it is hardly the first platform that comes to mind for our industry. Nevertheless, we are going to put that assumption to the test using Python and our data analytics skills. In this post, we will share how to collect social media data using the Instagram API.
Data collection method
The Instagram API won't simply let us collect data about other users, but there is a workaround. Try sending the following request:
https://instagram.com/leftjoin/?__a=1
The request returns a JSON object with detailed user information. For instance, we can easily get the account name, the number of posts, followers, and subscriptions, as well as the first ten posts from the account with their like and comment counts. The pyInstagram library allows us to send such requests.
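To get a feel for this response before writing any real code, here is a minimal sketch of fetching and parsing that JSON with requests. Keep in mind that Instagram changes this endpoint often and may require authentication, so the key layout below is an assumption based on what the response looked like at the time of writing.
import requests
# Hypothetical example: the structure of the '?__a=1' response may change at any time
response = requests.get('https://instagram.com/leftjoin/?__a=1')
data = response.json()
user = data['graphql']['user']  # assumed key layout
print(user['username'])  # account name
print(user['edge_owner_to_timeline_media']['count'])  # number of posts
print(user['edge_followed_by']['count'])  # followers
print(user['edge_follow']['count'])  # subscriptions
# the first posts with their like and comment counts
for edge in user['edge_owner_to_timeline_media']['edges']:
    node = edge['node']
    print(node['edge_liked_by']['count'], node['edge_media_to_comment']['count'])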
SQL schema
Data will be collected into three ClickHouse tables: users, posts, and comments. The users table will contain user data, such as user id, username, the user's first and last name, account description, the numbers of followers, subscriptions, posts, comments, and likes, whether the account is verified, and so on.
CREATE TABLE instagram.users
(
`added_at` DateTime,
`user_id` UInt64,
`user_name` String,
`full_name` String,
`base_url` String,
`biography` String,
`followers_count` UInt64,
`follows_count` UInt64,
`media_count` UInt64,
`total_comments` UInt64,
`total_likes` UInt64,
`is_verified` UInt8,
`country_block` UInt8,
`profile_pic_url` Nullable(String),
`profile_pic_url_hd` Nullable(String),
`fb_page` Nullable(String)
)
ENGINE = ReplacingMergeTree
ORDER BY added_at
The posts table will be populated with the post owner's name, post id, caption, comment count, and so on. To check whether a post is an advertisement, an Instagram carousel, or a video, we can use the is_ad, is_album, and is_video fields.
CREATE TABLE instagram.posts
(
`added_at` DateTime,
`owner` String,
`post_id` UInt64,
`caption` Nullable(String),
`code` String,
`comments_count` UInt64,
`comments_disabled` UInt8,
`created_at` DateTime,
`display_url` String,
`is_ad` UInt8,
`is_album` UInt8,
`is_video` UInt8,
`likes_count` UInt64,
`location` Nullable(String),
`resources` Array(String),
`video_url` Nullable(String)
)
ENGINE = ReplacingMergeTree
ORDER BY added_at
In the comments table, we store each comment separately with the comment owner and text.
CREATE TABLE instagram.comments
(
`added_at` DateTime,
`comment_id` UInt64,
`post_id` UInt64,
`comment_owner` String,
`comment_text` String
)
ENGINE = ReplacingMergeTree
ORDER BY added_at
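If you prefer to manage the schema from Python, the same DDL can be executed through clickhouse-driver. This is just a sketch, assuming the instagram database already exists on the server; the connection parameters are placeholders.
from clickhouse_driver import Client
# Connection parameters are placeholders; use your own host and credentials
client = Client(host='12.34.56.789', user='default', password='', port='9000', database='instagram')
# The same DDL as above can be run programmatically, shown here for the comments table
client.execute('''
    CREATE TABLE IF NOT EXISTS instagram.comments
    (
        `added_at` DateTime,
        `comment_id` UInt64,
        `post_id` UInt64,
        `comment_owner` String,
        `comment_text` String
    )
    ENGINE = ReplacingMergeTree
    ORDER BY added_at
''')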
Writing the script
Import the following classes from the library: Account, Media, WebAgent and Comment.
from instagram import Account, Media, WebAgent, Comment
from datetime import datetime
from clickhouse_driver import Client
import requests
import pandas as pd
Next, create an instance of the WebAgent class, which is required by some library methods and for updating data. To collect anything meaningful we need at least a list of account names. Since we don't have one yet, send the following request to search for profiles by the keywords specified in queries_list. The search results will consist of Instagram pages that match any keyword in the list.
agent = WebAgent()
queries_list = ['machine learning', 'data science', 'data analytics', 'analytics', 'business intelligence',
'data engineering', 'computer science', 'big data', 'artificial intelligence',
'deep learning', 'data scientist','machine learning engineer', 'data engineer']
client = Client(host='12.34.56.789', user='default', password='', port='9000', database='instagram')
url = 'https://www.instagram.com/web/search/topsearch/?context=user&count=0'
Let's iterate over the keywords, collecting all matching accounts. Then remove duplicates from the obtained list by converting it to a set and back.
response_list = []
for query in queries_list:
    response = requests.get(url, params={
        'query': query
    }).json()
    response_list.extend(response['users'])
instagram_pages_list = []
for item in response_list:
    instagram_pages_list.append(item['user']['username'])
instagram_pages_list = list(set(instagram_pages_list))
Now we need to loop through the list of pages and request detailed information about each account that is not in the table yet. Create an instance of the Account class, passing the username as a parameter. Then update the account information using the agent.update() method. To keep things moving, we will collect only the first 100 posts. Finally, store the post ids returned by the agent.get_media() method in a list named media_list.
Collecting user media data
all_posts_list = []
username_count = 0
for username in instagram_pages_list:
    if client.execute(f"SELECT count(1) FROM users WHERE user_name='{username}'")[0][0] == 0:
        print('username:', username_count, '/', len(instagram_pages_list))
        username_count += 1
        account_total_likes = 0
        account_total_comments = 0
        try:
            account = Account(username)
        except Exception as E:
            print(E)
            continue
        try:
            agent.update(account)
        except Exception as E:
            print(E)
            continue
        if account.media_count < 100:
            post_count = account.media_count
        else:
            post_count = 100
        print(account, post_count)
        media_list, _ = agent.get_media(account, count=post_count, delay=1)
        count = 0
Because we need to know the total number of likes and comments before adding a new user to our database, we start by collecting the posts. Almost all of the required fields belong to the Media class:
Collecting user posts
for media_code in media_list:
    if client.execute(f"SELECT count(1) FROM posts WHERE code='{media_code}'")[0][0] == 0:
        print('posts:', count, '/', len(media_list))
        count += 1
        post_insert_list = []
        post = Media(media_code)
        agent.update(post)
        post_insert_list.append(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
        post_insert_list.append(str(post.owner))
        post_insert_list.append(post.id)
        if post.caption is not None:
            post_insert_list.append(post.caption.replace("'","").replace('"', ''))
        else:
            post_insert_list.append("")
        post_insert_list.append(post.code)
        post_insert_list.append(post.comments_count)
        post_insert_list.append(int(post.comments_disabled))
        post_insert_list.append(datetime.fromtimestamp(post.date).strftime('%Y-%m-%d %H:%M:%S'))
        post_insert_list.append(post.display_url)
        try:
            post_insert_list.append(int(post.is_ad))
        except TypeError:
            post_insert_list.append('cast(Null as Nullable(UInt8))')
        post_insert_list.append(int(post.is_album))
        post_insert_list.append(int(post.is_video))
        post_insert_list.append(post.likes_count)
        if post.location is not None:
            post_insert_list.append(post.location)
        else:
            post_insert_list.append('')
        post_insert_list.append(post.resources)
        if post.video_url is not None:
            post_insert_list.append(post.video_url)
        else:
            post_insert_list.append('')
        account_total_likes += post.likes_count
        account_total_comments += post.comments_count
        try:
            client.execute(f'''
                INSERT INTO posts VALUES {tuple(post_insert_list)}
            ''')
        except Exception as E:
            print('posts:')
            print(E)
            print(post_insert_list)
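Interpolating the values tuple into the query string works, but it is fragile: quotes, emoji, or NULLs in captions can break the statement. clickhouse-driver can also serialize the rows itself when they are passed as a parameter, which is a safer pattern. A short sketch of what the same insert could look like, using the client and post_insert_list from the script above:
# Alternative sketch: let the driver serialize the values instead of building SQL by hand.
# post_insert_list must contain values matching the column order of instagram.posts.
try:
    client.execute('INSERT INTO posts VALUES', [tuple(post_insert_list)])
except Exception as E:
    print('posts:', E)
Note that with this pattern the fallback for post.is_ad would have to be a real value (for example 0, or None with a Nullable(UInt8) column) rather than a SQL cast string.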
Call the get_comments() method and store the returned comments in a variable with the same name:
Collecting post comments
comments = agent.get_comments(media=post)
for comment_id in comments[0]:
    comment_insert_list = []
    comment = Comment(comment_id)
    comment_insert_list.append(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
    comment_insert_list.append(comment.id)
    comment_insert_list.append(post.id)
    comment_insert_list.append(str(comment.owner))
    comment_insert_list.append(comment.text.replace("'","").replace('"', ''))
    try:
        client.execute(f'''
            INSERT INTO comments VALUES {tuple(comment_insert_list)}
        ''')
    except Exception as E:
        print('comments:')
        print(E)
        print(comment_insert_list)
Now that we have obtained the user's posts and comments, the new account information can be added to the users table.
Collecting user data
user_insert_list = []
user_insert_list.append(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
user_insert_list.append(account.id)
user_insert_list.append(account.username)
user_insert_list.append(account.full_name)
user_insert_list.append(account.base_url)
user_insert_list.append(account.biography)
user_insert_list.append(account.followers_count)
user_insert_list.append(account.follows_count)
user_insert_list.append(account.media_count)
user_insert_list.append(account_total_comments)
user_insert_list.append(account_total_likes)
user_insert_list.append(int(account.is_verified))
user_insert_list.append(int(account.country_block))
user_insert_list.append(account.profile_pic_url)
user_insert_list.append(account.profile_pic_url_hd)
if account.fb_page is not None:
    user_insert_list.append(account.fb_page)
else:
    user_insert_list.append('')
try:
    client.execute(f'''
        INSERT INTO users VALUES {tuple(user_insert_list)}
    ''')
except Exception as E:
    print('users:')
    print(E)
    print(user_insert_list)
Conclusion
To sum up, we have collected data on about 500 users, with nearly 20K posts and 40K comments. As the database gets updated, we can write a simple query to get today's top 10 most-followed accounts related to ML, AI & Data Science.
SELECT *
FROM users
ORDER BY followers_count DESC
LIMIT 10
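Since pandas is already imported in the script, the result of such a query can be loaded straight into a DataFrame for further analysis. A small sketch, assuming the client and the pandas import from the script above (the column list is shortened here for brevity):
# Run the query through the same clickhouse-driver client and wrap the rows in a DataFrame
rows = client.execute('''
    SELECT user_name, followers_count, media_count
    FROM users
    ORDER BY followers_count DESC
    LIMIT 10
''')
top_accounts = pd.DataFrame(rows, columns=['user_name', 'followers_count', 'media_count'])
print(top_accounts)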
And as a bonus, here is a list of the most interesting Instagram accounts on this topic:
- @ai_machine_learning
- @neuralnine
- @datascienceinfo
- @compscistuff
- @computersciencelife
- @welcome.ai
- @papa_programmer
- @data_science_learn
- @neuralnet.ai
- @techno_thinkers
View the code on GitHub