
Web Scraping – BeautifulSoup Python


Data collection from public sources is often beneficial to a business or an individual, so the term “web scraping” is nothing new. The data we want is usually buried within HTML tags and attributes, and Python is a popular choice for collecting it. The intention of this post is to host example code snippets that people can borrow from to build scrapers for their own needs, using the BeautifulSoup and urllib modules in Python. I will be using GitHub’s trending page https://github.com/trending throughout this post for the examples, mainly because it is well suited for applying a variety of BeautifulSoup methods.

Installation:

pip install beautifulsoup4
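The examples below pass "lxml" to BeautifulSoup as the parser, so you will also need the lxml package (alternatively, Python's built-in "html.parser" works without an extra install):

pip install lxml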

Get the HTML of a page:
>>> import urllib.request
>>> resp = urllib.request.urlopen("https://github.com/trending")
>>> resp.getcode()
200
>>> resp.read() # the html
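Note that in Python 3, read() returns the body as bytes, and a response body can only be read once. A minimal sketch of getting the HTML as a str instead (a fresh request, since resp above has already been read):
>>> resp = urllib.request.urlopen("https://github.com/trending")
>>> html_text = resp.read().decode("utf-8")  # decode bytes to str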
Using BeautifulSoup to get the title of a page
>>> import urllib.request
>>> import bs4
>>> github_trending = urllib.request.urlopen("https://github.com/trending")
>>> trending_soup = bs4.BeautifulSoup(github_trending.read(), "lxml")
>>> trending_soup.title
<title>Trending  repositories on GitHub today · GitHub</title>
>>> trending_soup.title.string
'Trending  repositories on GitHub today · GitHub'
>>>
Find a single element by tag name, find multiple elements by tag name
>>> ordered_list = trending_soup.find('ol') # single element
>>>
>>> type(ordered_list)
<class 'bs4.element.Tag'>
>>>
>>> all_li = ordered_list.find_all('li') # multiple elements
>>>
>>> type(all_li)
<class 'bs4.element.ResultSet'>
>>>
>>> trending_repositories = [each_list.find('h3').text for each_list in all_li]
>>> for each_repository in trending_repositories:
...     print(each_repository.strip())
...
klauscfhq / taskbook
robinhood / faust
Avik-Jain / 100-Days-Of-ML-Code
jxnblk / mdx-deck
faressoft / terminalizer
trekhleb / javascript-algorithms
apexcharts / apexcharts.js
grain-lang / grain
thedaviddias / Front-End-Performance-Checklist
istio / istio
CyC2018 / Interview-Notebook
fivethirtyeight / russian-troll-tweets
boyerjohn / rapidstring
donnemartin / system-design-primer
awslabs / aws-cdk
QUANTAXIS / QUANTAXIS
crossoverJie / Java-Interview
GoogleChromeLabs / ndb
dylanbeattie / rockstar
vuejs / vue
sbussard / canvas-sketch
Microsoft / vscode
flutter / flutter
tensorflow / tensorflow
Snailclimb / Java-Guide
>>>
Getting Attributes of an element
>>> for each_list in all_li:
...     anchor_element = each_list.find('a')
...     print("https://github.com" + anchor_element['href'])
...
https://github.com/klauscfhq/taskbook
https://github.com/robinhood/faust
https://github.com/Avik-Jain/100-Days-Of-ML-Code
https://github.com/jxnblk/mdx-deck
https://github.com/faressoft/terminalizer
https://github.com/trekhleb/javascript-algorithms
https://github.com/apexcharts/apexcharts.js
https://github.com/grain-lang/grain
https://github.com/thedaviddias/Front-End-Performance-Checklist
https://github.com/istio/istio
https://github.com/CyC2018/Interview-Notebook
https://github.com/fivethirtyeight/russian-troll-tweets
https://github.com/boyerjohn/rapidstring
https://github.com/donnemartin/system-design-primer
https://github.com/awslabs/aws-cdk
https://github.com/QUANTAXIS/QUANTAXIS
https://github.com/crossoverJie/Java-Interview
https://github.com/GoogleChromeLabs/ndb
https://github.com/dylanbeattie/rockstar
https://github.com/vuejs/vue
https://github.com/sbussard/canvas-sketch
https://github.com/Microsoft/vscode
https://github.com/flutter/flutter
https://github.com/tensorflow/tensorflow
https://github.com/Snailclimb/Java-Guide
>>>
Using a class name or other attributes to get an element
>>> for each_list in all_li:
...     total_stars_today = each_list.find(attrs={'class':'float-sm-right'}).text
...     print(total_stars_today.strip())
...
1,063 stars today
846 stars today
596 stars today
484 stars today
459 stars today
429 stars today
443 stars today
366 stars today
330 stars today
282 stars today
182 stars today
190 stars today
200 stars today
190 stars today
166 stars today
164 stars today
144 stars today
158 stars today
157 stars today
144 stars today
144 stars today
142 stars today
132 stars today
101 stars today
108 stars today
>>>
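find() also accepts a class_ keyword argument (the trailing underscore avoids clashing with Python's reserved word class), which is equivalent to passing attrs={'class': ...}:
>>> all_li[0].find(class_='float-sm-right').text.strip()
'1,063 stars today'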
Navigate the children of an element
>>> for each_child in ordered_list.children:
...     if isinstance(each_child, bs4.element.Tag):  # .children also yields the '\n' strings between tags, so skip those
...         print(each_child.find('h3').text.strip())
...
klauscfhq / taskbook
robinhood / faust
Avik-Jain / 100-Days-Of-ML-Code
jxnblk / mdx-deck
faressoft / terminalizer
trekhleb / javascript-algorithms
apexcharts / apexcharts.js
grain-lang / grain
thedaviddias / Front-End-Performance-Checklist
istio / istio
CyC2018 / Interview-Notebook
fivethirtyeight / russian-troll-tweets
boyerjohn / rapidstring
donnemartin / system-design-primer
awslabs / aws-cdk
QUANTAXIS / QUANTAXIS
crossoverJie / Java-Interview
GoogleChromeLabs / ndb
dylanbeattie / rockstar
vuejs / vue
sbussard / canvas-sketch
Microsoft / vscode
flutter / flutter
tensorflow / tensorflow
Snailclimb / Java-Guide
>>>

The .children attribute only returns the immediate children of the parent element. If you'd like to get all of the elements nested under a certain element, you should use .descendants.

Navigate descendants of an element
>>> for each_descendant in ordered_list.descendants:
...     pass  # perform operations on every tag and string nested anywhere under the <ol>
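A quick way to see the difference between the two is to compare counts; a minimal sketch (the exact numbers depend on the page markup at the time you fetch it):
>>> len(list(ordered_list.children))     # immediate children only
>>> len(list(ordered_list.descendants))  # far larger: every nested tag and string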
Navigating previous and next siblings of elements
>>> all_li = ordered_list.find_all('li')
>>> fifth_li = all_li[4]
>>> # each li element is separated by a '\n' text node, hence to navigate to the fourth li we call previous_sibling twice
...
>>>
>>> fourth_li = fifth_li.previous_sibling.previous_sibling
>>> fourth_li.find('h3').text.strip()
'jxnblk / mdx-deck'
>>>
>>> # similarly, to navigate to the sixth li from the fifth, we use next_sibling twice
...
>>> sixth_li = fifth_li.next_sibling.next_sibling
>>> sixth_li.find('h3').text.strip()
'trekhleb / javascript-algorithms'
>>>
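If you'd rather not deal with the '\n' text nodes at all, BeautifulSoup provides find_previous_sibling() and find_next_sibling(), which skip over strings when given a tag name:
>>> fifth_li.find_previous_sibling('li').find('h3').text.strip()
'jxnblk / mdx-deck'
>>> fifth_li.find_next_sibling('li').find('h3').text.strip()
'trekhleb / javascript-algorithms'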
Navigate to parent of an element
>>> all_li = ordered_list.find_all('li')
>>> first_li = all_li[0]
>>> li_parent = first_li.parent
>>> # the li_parent is the ordered list <ol>
...
>>>
Putting it all together (GitHub Trending Scraper)
>>> import urllib.request
>>> import bs4
>>>
>>> github_trending = urllib.request.urlopen("https://github.com/trending")
>>> trending_soup = bs4.BeautifulSoup(github_trending.read(), "lxml")
>>> ordered_list = trending_soup.find('ol')
>>> for each_list in ordered_list.find_all('li'):
...     repository_name = each_list.find('h3').text.strip()
...     repository_url = "https://github.com" + each_list.find('a')['href']
...     total_stars_today = each_list.find(attrs={'class':'float-sm-right'}).text
...     print(repository_name, repository_url, total_stars_today)
...

klauscfhq / taskbook https://github.com/klauscfhq/taskbook 1,404 stars today
robinhood / faust https://github.com/robinhood/faust 960 stars today
Avik-Jain / 100-Days-Of-ML-Code https://github.com/Avik-Jain/100-Days-Of-ML-Code 566 stars today
trekhleb / javascript-algorithms https://github.com/trekhleb/javascript-algorithms 431 stars today
jxnblk / mdx-deck https://github.com/jxnblk/mdx-deck 416 stars today
apexcharts / apexcharts.js https://github.com/apexcharts/apexcharts.js 411 stars today
faressoft / terminalizer https://github.com/faressoft/terminalizer 406 stars today
istio / istio https://github.com/istio/istio 309 stars today
thedaviddias / Front-End-Performance-Checklist https://github.com/thedaviddias/Front-End-Performance-Checklist 315 stars today
grain-lang / grain https://github.com/grain-lang/grain 301 stars today
boyerjohn / rapidstring https://github.com/boyerjohn/rapidstring 232 stars today
CyC2018 / Interview-Notebook https://github.com/CyC2018/Interview-Notebook 186 stars today
donnemartin / system-design-primer https://github.com/donnemartin/system-design-primer 189 stars today
awslabs / aws-cdk https://github.com/awslabs/aws-cdk 186 stars today
fivethirtyeight / russian-troll-tweets https://github.com/fivethirtyeight/russian-troll-tweets 159 stars today
GoogleChromeLabs / ndb https://github.com/GoogleChromeLabs/ndb 172 stars today
crossoverJie / Java-Interview https://github.com/crossoverJie/Java-Interview 148 stars today
vuejs / vue https://github.com/vuejs/vue 137 stars today
Microsoft / vscode https://github.com/Microsoft/vscode 137 stars today
flutter / flutter https://github.com/flutter/flutter 132 stars today
QUANTAXIS / QUANTAXIS https://github.com/QUANTAXIS/QUANTAXIS 132 stars today
dylanbeattie / rockstar https://github.com/dylanbeattie/rockstar 130 stars today
tensorflow / tensorflow https://github.com/tensorflow/tensorflow 106 stars today
Snailclimb / Java-Guide https://github.com/Snailclimb/Java-Guide 111 stars today
WeTransfer / WeScan https://github.com/WeTransfer/WeScan 118 stars today


Enroll In 100% Off Courses At Udemy Automatically: Python Code To Get Paid Courses

Udemy is a teaching and learning platform with loads of courses in various categories. Quite often, coupon codes are available for purchasing courses at a minimal price or with a 100% discount, and various websites serve these coupon codes. One that I rely on is GrowthCoupon.com.

Now, I am not writing a review of 100% off coupon providers. In this post I will explain the code I use to extract 100% off coupon codes from growthcoupon.com and then enroll in those courses automatically. I have automated the script so that I don't need to watch for new coupon codes, which saves me time. A single run of the code below enrolls you in the 10 latest 100% off courses available at growthcoupon.com. You may wish to schedule the script to run every hour or so.

Get 100% off Udemy courses automatically using Python

from json import loads
from bs4 import BeautifulSoup
import mechanize
api_key = "8def4868-509c-4f34-8667-f28684483810%3AS7obmNY1SsOfHLhP%2Fft6Z%2Fwc46x8B2W3BaHpa5aK2vJwy8VSTHvaPVuUpSLimHkn%2BLqSjT6NERzxqdvQ%2BpQfYA%3D%3D"
growth_coupon_url = "https://api.import.io/store/data/a5ef05a9-784e-410c-9f84-51e1e8ff413c/_query?input/webpage/url=http%3A%2F%2Fgrowthcoupon.com%2Fcoupon-category%2F100-discount%2F&_user=8def4868-509c-4f34-8667-f28684483810&_apikey=" + api_key
br = mechanize.Browser()
br.set_handle_robots(False)  # do not honor robots.txt
br.addheaders = [("User-agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
sign_in = br.open("https://www.udemy.com/join/login-popup/")
br.select_form(nr=3)  # select the login form by its index (the fourth form on the page)
br["email"] = "email@domain.com"
br["password"] = "password"
logged_in = br.submit()

growth_coupon = br.open(growth_coupon_url)
json_obj = loads(growth_coupon.read())

for course_link in json_obj["results"]:
    try:
        course_page = br.open(str(course_link["couponcode_link"]))
        soup = BeautifulSoup(course_page, "lxml")
        for link in soup.find_all("a"):
            req_link = link.get('href')
            if 'https://www.udemy.com/payment/checkout' in str(req_link):
                print(req_link)
                br.open(str(req_link))  # opening the checkout URL enrolls us in the course
                print("success")
                break
    except (mechanize.HTTPError, mechanize.URLError) as e:
        print(e)

The above program is pure Python: it extracts the 10 latest 100% off coupon codes from GrowthCoupon.com and then enrolls you in those courses automatically.

1. Lines 1 to 3

The first three lines are the import statements. Our program uses three Python libraries. Among them, mechanize is used to log in to the Udemy account, and BeautifulSoup is used to extract data on the basis of tags; here we use it to collect the links on a page. The json module's loads is used to parse the JSON response.

2. Lines 4 and 5

We are using the import.io API to extract data from GrowthCoupon. I got to know about this very cool resource in my Programming Synthesis class at my college. Here's how to get and use the import.io API. We store the API key in the variable api_key, then concatenate it to growth_coupon_url, which is the standard request URL for getting data in JSON format from GrowthCoupon.

3. Lines 6 to 13

Lines 6 to 13 are the procedure for logging in to a website (Udemy in our case). Line 6 initializes a browser. Line 7 tells it to ignore the robots.txt file. Line 8 adds a user agent to the browser. Line 9 opens the login URL in the browser we initialized earlier.

The next thing you will need is the form you want to work with; in our case, the login form. Go to the username box, right click on it, and choose the inspect element option, then scroll up until you find the first form tag. In most cases the form will have a name attribute, but some websites omit it. If it is present, the value of the name attribute is what you use to access the form. The other way to access a form is by its index, with the first form indexed 0. When the form name is not available, you need to find out how many forms are present on the login page (most sites have only one, since all you want a login page to do is log you in if authenticated). In this case the login form's index is 3, as the enumeration sketch below shows.
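If you are unsure of the index, mechanize can enumerate the forms it sees on a page. A minimal sketch, assuming the same login URL as in the script above:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.open("https://www.udemy.com/join/login-popup/")
for index, form in enumerate(br.forms()):
    # print each form's position, name (None if absent) and action URL
    print(index, form.name, form.action)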

Now you need to know the variable names assigned to the email/username and password fields. To get these, inspect the element while your cursor is inside each field. Below is a snapshot to give you an idea of the field names you need to take care of.

[Screenshot: the Udemy login form markup, showing the email and password field names]

4. Lines 15 and 16

Here, on line 15, we open the URL that returns the data from GrowthCoupon in JSON format. Line 16 parses the response body into a JSON object.

5. Line 18

Our JSON object is stored in the json_obj variable, but the data we need is inside an array that is the value for the key “results”. Hence we iterate through this array, as sketched below.
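For reference, the script relies on only two keys in the response. Its shape is roughly the following (the values are illustrative placeholders, not real data):

{
    "results": [
        {"couponcode_link": "http://growthcoupon.com/..."},
        {"couponcode_link": "http://growthcoupon.com/..."}
    ]
}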

6. Line 20

Now we open the coupon code link, which is the value of the key “couponcode_link” in each element of the array. On each iteration of the loop, the response for that element's URL is stored in the variable course_page.

7. Line 21

We then convert the page response into a soup object by passing the response to the BeautifulSoup constructor.

8. Lines 22 to 27

Now we iterate through each link found in the soup object. The URL for enrolling in a Udemy course starts with the string “https://www.udemy.com/payment/checkout”, so on each iteration we check whether that string is a substring of the link. If the condition is satisfied, we open that link to enroll ourselves in the course, and that is the end of the working code.

Thanks for reading :) Enjoy! If you have any questions regarding the code or this post, comment below.