Compiling My Year With An Interpreter – Python Programming

- - Applications, Python, Tutorials
To my dear readers, without whom this blog would be a dead planet: thank you for spending your valuable time here, and Happy New Year. 2015 is gone, taking most of my sophomore year with it. I wouldn't say 2015 was much different from other years: a few achievements, some life lessons, and a never-ending learning curve. My first article of 2016 sums up my Pythonic experience from 2015, the small scripts I used throughout the semester to ease my life. Bear with a novice's automations.
1. Enroll in 100% off courses at Udemy automatically

from json import loads
from bs4 import BeautifulSoup
import mechanize
api_key = "8def4868-509c-4f34-8667-f28684483810%3AS7obmNY1SsOfHLhP%2Fft6Z%2Fwc46x8B2W3BaHpa5aK2vJwy8VSTHvaPVuUpSLimHkn%2BLqSjT6NERzxqdvQ%2BpQfYA%3D%3D"
growth_coupon_url = "" + api_key  # coupon API endpoint (URL stripped from this archive)
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
sign_in = br.open("")  # udemy login page (URL stripped from this archive)
br.select_form(nr=3)  # the login form is the form at index 3
br["email"] = ""
br["password"] = "password"
logged_in = br.submit()

growth_coupon = br.open(growth_coupon_url)
json_obj = loads(

for course_link in json_obj["results"]:
    try:
        course_page = br.open(str(course_link["couponcode_link"]))
        soup = BeautifulSoup(course_page)
        for link in soup.find_all("a"):
            req_link = link.get('href')
            if '' in str(req_link):  # enrolment-link prefix (stripped from this archive)
                br.open(req_link)  # opening the link enrolls us in the course
                print req_link
                print "success"
    except (mechanize.HTTPError, mechanize.URLError) as e:
        print e.code

This has been my favorite automation of the semester. The program checks for 100% off coupon codes for paid Udemy courses and enrolls me in them. I uploaded the script to a free hosting service that runs it daily without me having to worry about it. At this time I have over 800 courses in my Udemy account, and each course costs about $75 on average.

2. Conversation between two cleverbots

import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Firefox()
browser2 = webdriver.Firefox()

browser.get("")   # cleverbot URL (stripped from this archive)
browser2.get("")  # a second window on the same site

input_for_bot = browser.find_element_by_name("stimulus")
input_for_bot2 = browser2.find_element_by_name("stimulus")

output_from_bot = ""
output_from_bot2 = "Hey, friend, what's up"

for i in range(0, 200):
    # relay the last reply of each bot to the other one
    input_for_bot.send_keys(output_from_bot2 + Keys.RETURN)
    time.sleep(5)  # give the bot time to answer
    output_from_bot = ""
    for elem in browser.find_elements_by_xpath('.//span[@class="bot"]'):
        output_from_bot = elem.text  # keep the most recent reply
    input_for_bot2.send_keys(output_from_bot + Keys.RETURN)
    time.sleep(5)
    output_from_bot2 = ""
    for elem in browser2.find_elements_by_xpath('.//span[@class="bot"]'):
        output_from_bot2 = elem.text

This semester I took a Cognitive Science course and enjoyed it. One assignment asked for a page of conversation with Cleverbot. I submitted the assignment and later decided to bridge a conversation between two Cleverbots, using the selenium module in Python. It was great fun and kind of felt like an achievement.

3. Using pyautogui before my exams

import pyautogui
import time

time.sleep(5)  # time to bring the photo folder into focus
pyautogui.click(100, 450)  # click the search box (the x coordinate was stripped from this archive)
pyautogui.typewrite('graphicsnotes')
pyautogui.press('enter')  # open the first photo
for i in range(107):
    pyautogui.press('right')  # move to the next photo
    pyautogui.press('enter')  # trigger the rotate-and-save action in the viewer

Too lazy to copy notes in class, I relied on photos of the notes sent by a friend. The photos, all 107 of them, were in landscape orientation. I had come across pyautogui in Al Sweigart's course at Udemy and quickly wrote some 5-7 lines of code to open each picture, rotate it, and save it while I had my dinner. By the way, I had no clue I was even enrolled in Al Sweigart's course until I opened my account to check which courses I had, all thanks to the scheduled script that enrolls me in Udemy courses daily.

4. Automating signups and app creation at Dropbox for unlimited space

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Firefox()
browser.get("")  # dropbox signup page (URL stripped from this archive)

# the signup inputs carry auto-generated ids starting with 'pyxl'
list_of_inputs = browser.find_elements_by_xpath("//div/input[starts-with(@id, 'pyxl')]")

list_of_inputs[7].send_keys("first name")
list_of_inputs[8].send_keys("last name")

# the remaining lines (email and password fields, and the form submit)
# were lost from this archive
I had been involved with some people from IIT on some app ideas. We needed cloud space and agreed on Dropbox. I was given a bunch of emails with a common password, so I wrote a Python program to do the signups, and later another one to create apps at Dropbox and collect the API keys and secret keys needed to access the files there. Unfortunately the project never continued.

Overall, I had a good year, though most of my time went to college and an internship (building mobile apps with Apache Cordova). In the new year I will talk to my manager about switching to some Python projects. My resolution is to "write more code and continue blogging about it." That's all I can think of for now.

I would love suggestions for my new year resolution; do comment below. Once again, Happy New Year.

Is It A WordPress Website? Checker Script In Python

- - Applications, Python, Tutorials

By the end of this read, you will be able to write a program that takes a list of domain names from a text file and checks whether each website is powered by WordPress. The program writes the result for each website to a Google spreadsheet for later use, but you could just as easily run any other code the moment a WordPress site is detected.

Python script to check if a bunch of websites are WordPress powered

Below is the program, written in Python, which reads a text file storing one domain name per line, checks whether each is a WordPress website, and writes the status to a Google spreadsheet.


from bs4 import BeautifulSoup
import mechanize
import gdata.spreadsheet.service
import datetime
rowdict = {}
rowdict['date'] = str(
spread_sheet_id = '1mvebf95F5c_QUCg4Oep_oRkDRe40QRDzVfAzt29Y_QA'
worksheet_id = 'od6'
client = gdata.spreadsheet.service.SpreadsheetsService()
client.debug = True = ""  # your google account email (stripped from this archive)
client.password = 'password'
client.source = 'iswordpress'
client.ProgrammaticLogin()
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
base_url = br.open("")  # the wordpress-checker site (URL stripped from this archive)
with open('websitesforwpcheck.txt') as f:
    for line in f:
        rowdict['website'] = str(line)
        br.select_form(nr=0)  # the search form is the first form on the page
        br["q"] = str(line)
        isitwp_response = br.submit()
        a =
        if "Good news everyone" in a:
            rowdict['iswordpresswebsite'] = "yes"
        else:
            rowdict['iswordpresswebsite'] = "no"
        client.InsertRow(rowdict, spread_sheet_id, worksheet_id)

Isitwordpresswebsite code explanation

Before beginning, we need to create a spreadsheet with three column headers: website, date, iswordpresswebsite.

1. Line 1 to 4

These are the import statements for the libraries we use. BeautifulSoup converts a response into a soup object; mechanize saves us the code otherwise needed for sessions and cookies; gdata connects to our Google account to access the spreadsheet we want to work with; datetime gives us the current date at the time the script runs.

2. Line 5

In line 5, we create an empty dictionary which will later hold key:value pairs for the date, the website name, and the website's status (is it WordPress or not). The Google spreadsheet API accepts such a dictionary and writes it to the sheet. In line 6, we store the current date under the key "date", which is later pushed to the spreadsheet.

3. Line 7 to 14

Now, before proceeding, we need the id of the spreadsheet we created; we need it to access the sheet from our program, and it is visible in the spreadsheet's URL. The worksheet id is 'od6' by default. Lines 9 to 14 log us in to our Google account and give us access to the spreadsheet we want to work with.
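As a small illustration, the spreadsheet id can be pulled out of the sheet's URL with a few lines of string handling. This sketch assumes the modern docs.google.com URL layout, where the id is the path segment after `/d/` (older gdata-era URLs carried it in a query parameter instead):

```python
def spreadsheet_id_from_url(url):
    # the id is the path segment right after "/d/"
    marker = "/d/"
    start = url.index(marker) + len(marker)
    end = url.find("/", start)
    return url[start:] if end == -1 else url[start:end]
```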

4. Line 15 to 18

In line 15, we use mechanize to initiate a browser. Line 16 tells it to ignore the robots.txt file, and line 17 adds a user agent to the browser. Line 18 opens the checker website we will use to test whether a site is WordPress powered.

5. Line 19

Line 19 opens the text file holding the domain names (one per line).

6. Line 20 to 21

In line 20, we iterate through the lines of the file where we have stored domain names. At each iteration, the domain name on the current line is stored in the variable line. In line 21, we store that domain name under the key "website".

7. Line 22 to 30

Line 22 selects the form at index 0, i.e. the first form on the checker page. The name attribute of the search box is "q", which is where we place the domain name before submitting the form. Line 24 submits the form and stores the response in the variable isitwp_response, and line 25 reads the complete page body into the variable a. In line 26 we check whether the substring "Good news everyone" is present in the response, which means the website is powered by WordPress. The following lines then set the key "iswordpresswebsite" to "yes" if the condition passed and "no" if it failed, and the row is inserted into the sheet. Remember that each key of the dictionary rowdict must match a column header in our spreadsheet.

This way we can test whether each of a number of websites (one per line in a text file) is WordPress powered. Thanks for reading :) . If you have any questions regarding the article or the code, comment below so we can discuss.
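The same question can also be answered offline, without asking a checker site: look for common WordPress fingerprints in a page's HTML. This is a rough sketch; the marker list is illustrative, not exhaustive, and a themed or hardened site may hide these markers:

```python
# Common strings that WordPress-generated pages tend to contain.
WP_MARKERS = ("wp-content", "wp-includes", 'name="generator" content="wordpress')

def looks_like_wordpress(html):
    # case-insensitive check for any known WordPress fingerprint
    html = html.lower()
    return any(marker in html for marker in WP_MARKERS)
```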

Cryptography With An Interpreter

- - Python, Technology, Tutorials, Web
Hey guys, it's been a long time since I published my last article; apologies for the delay. Anyway, straight into the topic: cryptography, or rather, cryptography with Python. This semester (V of BSc CS) I chose Cryptography as an elective over Neural Networks, and I am enjoying it. So far I have learned the Caesar cipher, Playfair cipher, Vigenere cipher, Vernam cipher, Hill cipher and Rail Fence cipher, and it's just the beginning. I have implemented each of these in Python; the GitHub repository containing the implementations will be updated as I learn more throughout the semester.

Quick walk

Cryptography is the science of hiding information. Mathematically, it is defined as a tuple of five members (M, C, E, D, K) where,

M → Message (Plain Text)

C → Cipher Text

K → Set of Keys

E → Encryption Algorithm E: M*K → C

D → Decryption Algorithm D: C*K → M

On the other hand, cryptanalysis is the study of cipher systems with the view of finding the weaknesses in them that will enable us to retrieve the plain text.

Furthermore, ciphers can be classified into various types based on their properties. Well cipher in general is an algorithm to perform encryption and decryption. We could group Substitution and Transposition into Historical ciphers. Substitution cipher may further be monoalphabetic or polyalphabetic. Modern ciphers are either based on input data or based on key. Stream cipher and Block cipher are the types of modern cipher based on input data. Based on key there are symmetric(private) and asymmetric(public) ciphers.

Following are one of the implementations of ciphers I’ve learned so far. For complete package, follow this link to the repository.

Vernam Cipher

#sample plain text : hello
#sample key : axhjb

def make_chunks(text, text_length, key_length):
    for i in range(0, text_length, key_length):
        yield text[i : i + key_length]

def encryptdecrypt(cipher_generator):
    final_text = ""
    for item in cipher_generator:
        for i in range(0, len(item)):
            final_text += alphabets[alphabets.index(key[i]) ^ alphabets.index(item[i])]
    return final_text

alphabets = "abcdefghijklmnopqrstuvwxyz"

plain_text = raw_input("Enter the plain text: ")
key = raw_input("Enter the key: ")
plain_text = plain_text.replace(" ", "")

p_generator = make_chunks(plain_text, len(plain_text), len(key))
cipher_text = encryptdecrypt(p_generator)
print "The cipher text is : ", cipher_text

c_generator = make_chunks(cipher_text, len(cipher_text), len(key))
decrypted_text = encryptdecrypt(c_generator)
print "The decrypted text is : ", decrypted_text
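One quirk of the index-XOR approach above: the XOR of two alphabet indices can exceed 25 (for example 25 ^ 5 = 28), which would raise an IndexError; the sample inputs happen to avoid it. XOR over raw bytes has no such problem, since XOR-ing two byte values always stays in the 0-255 range. A Python 3 sketch of the same cipher over bytes:

```python
def xor_cipher(data, key):
    """Vernam-style XOR cipher over bytes.

    Applying the same function twice with the same key restores
    the input, so one function both encrypts and decrypts.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```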

Tell me how you felt about the article in the comments section below, or shoot me a message. And as always, thanks for reading. Cheers!

Enroll In 100% Off Courses At Udemy Automatically – Python Code To Get Paid Courses

- - Applications, Python, Tutorials
Udemy is a teaching and learning platform with loads of courses in various categories. Very often, coupon codes are available that bring the price of a course down to a minimal amount or give a 100% discount. Various websites serve these coupon codes; the code below relies on one such coupon-aggregator site.

Now, I am not writing a review of 100% off coupon providers. In this post I will explain the code I use to extract 100% off coupon codes from the coupon site and then enroll in those courses automatically, so I never need to watch for new coupon codes myself. A single run of the code below enrolls you in the 10 latest 100% off courses available; you may wish to schedule the script to run every hour or so.

Get 100% off Udemy courses automatically using Python

from json import loads
from bs4 import BeautifulSoup
import mechanize
api_key = "8def4868-509c-4f34-8667-f28684483810%3AS7obmNY1SsOfHLhP%2Fft6Z%2Fwc46x8B2W3BaHpa5aK2vJwy8VSTHvaPVuUpSLimHkn%2BLqSjT6NERzxqdvQ%2BpQfYA%3D%3D"
growth_coupon_url = "" + api_key  # coupon API endpoint (URL stripped from this archive)
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
sign_in = br.open("")  # udemy login page (URL stripped from this archive)
br.select_form(nr=3)  # the login form is the form at index 3
br["email"] = ""
br["password"] = "password"
logged_in = br.submit()

growth_coupon = br.open(growth_coupon_url)
json_obj = loads(

for course_link in json_obj["results"]:
    try:
        course_page = br.open(str(course_link["couponcode_link"]))
        soup = BeautifulSoup(course_page)
        for link in soup.find_all("a"):
            req_link = link.get('href')
            if '' in str(req_link):  # enrolment-link prefix (stripped from this archive)
                br.open(req_link)  # opening the link enrolls us in the course
                print req_link
                print "success"
    except (mechanize.HTTPError, mechanize.URLError) as e:
        print e.code

The above program is pure Python: it extracts the 10 latest 100% off coupon codes from the coupon site and then enrolls you in those courses automatically.

1. Line 1 to 3

The first three lines are the import statements. Our program uses three Python libraries: mechanize to log in to the Udemy account, BeautifulSoup to pull data out of a page by tag (here, the links on a given page), and json's loads to parse the JSON response.

2. Line 4 and 5

We use the coupon site's API to extract the coupon data; I got to know about this very cool resource in my Programming Synthesis class at college. We store the API key in the variable api_key, then concatenate it to growth_coupon_url, the standard request url that returns the coupon data in JSON format.

3. Line 6 to 13

Lines 6 to 13 are the procedure for logging in to a website (Udemy in our case). Line 6 initializes a browser, line 7 tells it to ignore the robots.txt file, line 8 adds a user agent, and line 9 opens the login url in the browser we initiated earlier.

The next thing you need is the form you want to work with, i.e. the login form. Go to the username box, right-click it, and choose Inspect Element, then scroll up until you find the first form tag. If the form tag has a name attribute, its value is what you use to access the form. Another way to access forms is by index: the first form is index 0. When no form name is available, you need to find out how many forms are present on the login page (most sites have only one, since all a login page does is log you in if authenticated). In our case the login form's index is 3.

Next you need the names of the input fields that take the email/username and the password. Inspect those fields the same way to find the values assigned to their name attributes.


4. Line 15 and 16

On line 15, we open the url that returns the growthcoupon data in JSON format. Line 16 parses the response body into a json object.

5. Line 18

Our json object is stored in the json_obj variable, but the data we need sits inside an array that is the value of the key "results". Hence we iterate through this array.

6. Line 20

Now we open the coupon-code link, the value of the key "couponcode_link", which is present at each index of the array. On each loop the response for the current index's url is stored in the variable course_page.

7. Line 21

We then convert the page response to a soup element by invoking BeautifulSoup on course_page.

8. Line 22 to 27

Now we iterate through every link found in the soup element. The url for enrolling in a Udemy course starts with a fixed prefix, so we check whether that prefix is a substring of each link. If the condition is satisfied, we open the link to enroll ourselves in that course. That's the end of the code.
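The link-collecting step can also be sketched without BeautifulSoup, using only the standard library's html.parser; this is an offline illustration of the same "collect every a-tag's href, keep the ones matching a prefix" idea (the sample URLs below are made up for the demonstration):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def enrol_links(page_html, prefix):
    # keep only the links that start with the enrolment-URL prefix
    parser = LinkCollector()
    parser.feed(page_html)
    return [link for link in parser.links if link.startswith(prefix)]
```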

Thanks for reading :) Enjoy ! If you have any questions regarding the codes/post, comment below.

How To Swap Values Without Temporary Variable

My dear readers, this article explains how to swap two integers without using a temporary variable. I am writing it because the question comes up again and again in interviews for CS internships and jobs; in fact, a friend of mine was asked exactly this while applying for an internship at a local company. The problem tests your analytical skills as well as your grasp of programming fundamentals. In practice, however, you should never use the following approaches, for reasons I explain towards the end of the article. Without further delay, there are two approaches to the problem:

1. Using the XOR operator

2. Using addition and subtraction operators

Throughout the article we use ‘^’ symbol for representing XOR.

Below is the truth table for the XOR operation:

A B A ^ B
0 0 0
0 1 1
1 0 1
1 1 0

In short, XOR of two different bits gives 1, while XOR of two equal bits gives 0.

XOR, also termed exclusive-or, has the following properties:

1. Self – inversed

A ^ A = 0

Any value XOR’d with itself gives 0.

2. Identity

A ^ 0 = A

Any value XOR’d with 0 remains unchanged.

3. Associative

A ^ (B ^ C) = (A ^ B) ^ C

The order does not matter.

4. Commutative

A ^ B = B ^ A
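All four properties are easy to spot-check in Python, whose ^ operator is bitwise XOR:

```python
# Spot-checking the four XOR properties on a handful of integers.
for a in (0, 1, 2, 255):
    for b in (0, 3, 7):
        for c in (1, 5):
            assert a ^ a == 0                    # self-inversed
            assert a ^ 0 == a                    # identity
            assert a ^ (b ^ c) == (a ^ b) ^ c    # associative
            assert a ^ b == b ^ a                # commutative
```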

How to swap two integers using XOR

It is simple, and relies on the properties listed above.

Say, for instance, a = 2 and b = 3.

The XOR operation is performed between two corresponding bits of the numbers.

In binary 2 = 0010 and 3 = 0011

For instance, while performing a ^ b, the most significant bit of a is XOR’d with the most significant bit of b and so on.

The following three statements swap the values of a and b.

a = a ^ b

b = a ^ b

a = a ^ b
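The three statements can be run directly in Python on the example values:

```python
# The three XOR statements, executed on the article's example values.
a, b = 2, 3
a = a ^ b   # a now holds a ^ b
b = a ^ b   # (a ^ b) ^ b == a, so b receives the original a
a = a ^ b   # (a ^ b) ^ a == b, so a receives the original b
```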

Congratulations, you’ve swapped values in a and b without the use of a temporary variable. How?

What's going on in the first statement? Let's see it in a table.

a b Final value of a = a ^ b
0 0 0 (from self-inversed)
0 0 0 (from self-inversed)
1 1 0 (from self-inversed)
0 1 1 (from Identity)

At the end of the first statement, a = 0001 = 1, and this value of a = a ^ b is what enters the second statement.

What's going on in the second statement? Let's see it in a table.

a = a ^ b b Final value of b = (a ^ b) ^ b
0 0 0 (from self-inversed)
0 0 0 (from self-inversed)
0 1 1 (from Identity)
1 1 0 (from self-inversed)

Voila, b now holds the initial value of a, i.e. 2. It worked, but how? It's nothing but the self-inversed property followed by identity. For the second statement:

b = a ^ b // After the first statement the value of a is equal to a ^ b, so

b = (a ^ b) ^ b

This can be rearranged by associative property and written as

b = a ^ (b ^ b)

We know from the self-inversed rule that anything XOR'd with itself gives 0. So we may now write

b = a ^ (0)

From Identity rule, we know something XOR’d to 0 will leave the number unchanged. Therefore we get

b = a

That’s how we got the initial value of a to be stored to b at the end of second statement.

Moving on to the third statement which is

a = a ^ b // By now the value of a is a ^ b and value of b is the initial value of a

so we may also write

a = a ^ b ^ a

Now this can be written as

a = a ^ a ^ b

Using self-inversed rule

a = 0 ^ b

Using identity rule

a = b

The same, shown through a table:

A = a ^ b B = a A = a ^ b ^ a
0 0 0
0 0 0
0 1 1
1 0 1

At the end of the computation, we have b = 0010 = 2 and

a = 0011 = 3

Why you shouldn't use XOR to swap values

For an instance, say

int a = 2;

int *aptr = &a;

int *bptr = &a;

In the above statements, aptr and bptr are both pointers to a. Using what we just learned:

*aptr = *aptr ^ *bptr;

*bptr = *aptr ^ *bptr;

*aptr = *aptr ^ *bptr;

Since both pointers point to the same location, every statement operates on that one location: the very first XOR computes a ^ a = 0 and stores it there, and every later XOR keeps it 0. The result is not a swap but data loss; the value is destroyed.
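The aliasing failure is easy to reproduce in Python using two indices into the same list, which play the role of the two pointers:

```python
def xor_swap(values, i, j):
    # Swaps values[i] and values[j] in place using the XOR trick.
    # Breaks when i == j, because the first XOR zeroes the shared slot.
    values[i] = values[i] ^ values[j]
    values[j] = values[i] ^ values[j]
    values[i] = values[i] ^ values[j]

nums = [2, 3]
xor_swap(nums, 0, 1)   # distinct slots: the swap works -> [3, 2]

same = [5]
xor_swap(same, 0, 0)   # aliased slot: the value is destroyed -> [0]
```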

Swapping using addition and subtraction operators

a = a + b

b = a - b

a = a - b

This is pretty straightforward. Say, for instance,

a = 5 and b = 10

Then the result of the first statement will be

a = a + b yields

a = 5 + 10

a = 15, and b is unchanged, i.e. b = 10

Now when the second statement is run, we have

b = a - b yields

b = 15 - 10

b = 5 (which is the initial value of a; note that a has already been changed to 15)

Running the third statement

a = a - b

a = 15 - 5

a = 10 (which is the initial value of b; note that b has already been changed to 5)

So what do we have at the end? a = 10 and b = 5: the values are swapped.

Why addition and subtraction shouldn’t be used to swap values

Say the values of a and b are large integers. The first statement performs an addition, and in languages with fixed-width integer types a + b can overflow; for signed integers in C that is even undefined behavior.
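As noted at the start, these tricks are interview material rather than production code. In real Python code, the idiomatic swap is tuple packing and unpacking, which needs no visible temporary and has no overflow concern (Python ints are arbitrary precision):

```python
# The idiomatic Python swap: tuple packing/unpacking.
a, b = 5, 10
a, b = b, a
```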

Should you have any questions, do comment below.

Finding Facebook Fanpage Of Startups Selenium And Facepy Usage

- - Applications, Python, Technology, Web
As a regular activity of the Software Club at my college, we have weekly meetups where we discuss various ideas and code whatever is possible within an hour or so. We form groups of 3 or 4, and each group works on a different idea. At this point I really feel the tech giants (Google, FB, etc.) should also consider colleges in Nepal and similar countries for their internship programs. Yesterday (Jan 3) we discussed several ideas and my group worked on something cool too; however, I will only talk about my portion.

In a nutshell, I extracted all the startups in Nepal and found their Facebook pages. The data was then used by the other members of my group to do something cool which I can't discuss here.

Extract startups in Nepal and find FB page

from selenium import webdriver
from facepy import GraphAPI
import json
import time

startup_fan_pages = {}

access_token = "access_token"   # get a token from facebook's Graph API Explorer

graph = GraphAPI(access_token)

browser = webdriver.Firefox()
browser.get("")  # the startup-listing site (URL stripped from this archive)

time.sleep(40)  # wait for the browser to completely load the page

startups = browser.find_elements_by_class_name("panel-title")  # returns a list of elements having class="panel-title"
print("startups found")

for startup in startups:
    # search the Graph API for a page matching the startup's name
    r =, "page", page=False, retry=3)  # page=False refuses a generator
    if len(r['data']) > 0:
        startup_fan_pages[r['data'][0]['name']] = str(r['data'][0]['id'])

with open('startupsinnepalfanpages.json', 'w') as fp:
    json.dump(startup_fan_pages, fp)

The site I scraped is a listing of all the startups in Nepal, and I used selenium to extract every startup from it. To find their corresponding Facebook fan pages, I made use of facepy, which allows an easy and quick way to query the Graph API. All you need is an access token, which you can get from Facebook's Graph API Explorer.

In the real implementation the data is stored in a Google spreadsheet so that it is available to the other part of the program for further computation. If you are interested in how to push data to a spreadsheet via Python, go ahead and read Grab alexa rank and write to google spreadsheet using python. Keep the comments coming, and please don't use adblockers (Adsense is this site's only source of income); it keeps me motivated to publish good content.
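The response-handling step of the loop above can be tried without a live Graph API call by running it against a mocked search result. The function and the sample data below are hypothetical illustrations, shaped like the `r['data'][0]['name']` / `['id']` access in the script:

```python
def first_page(result):
    # Returns (name, id) of the first matching page, or None when
    # the search produced no results.
    data = result.get("data", [])
    if not data:
        return None
    return data[0]["name"], str(data[0]["id"])

# A mocked Graph API search response (name and id are made up).
mock = {"data": [{"name": "Example Startup", "id": 123456789}]}
```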

Sorting An Array Containing Json Elements With Specific Key In The Json Using Javascript

- - JavaScript, Tutorials, Web

Example of sorting an array containing json elements

The following function takes two parameters: an array of json elements, and a string naming the key member of the json on whose basis the array should be sorted. For this example we use "first_name" as the sort key. We apply the .toUpperCase() method to both values so the sort is case-insensitive.

We define a variable personal_informations which holds an array of jsons, and alert its value so the effect of the sort can be seen clearly. We then store the value returned from sortByKey in the variable sorted_personal_informations; alerting that value shows the array sorted by the key "first_name".

function sortByKey(array, key) {
    return array.sort(function(a, b) {
        var x = a[key].toUpperCase();
        var y = b[key].toUpperCase();
        return ((x < y) ? -1 : ((x > y) ? 1 : 0));
    });
}

var personal_informations = [{"first_name":"Bhishan","last_name":"Bhandari","email":"","country":"Nepal","phone_number":"9849060230"},{"first_name":"Ankit","last_name":"Pradhan","email":"","country":"Nepal","phone_number":"9999999999"}, {"first_name":"Aalok","last_name":"Koirala","email":"","country":"Nepal","phone_number":"8888888888"}, {"first_name":"Subigya","last_name":"Nepal","email":"","country":"USA","phone_number":"6666666666"}];

alert(JSON.stringify(personal_informations));  // order before sorting

var sorted_personal_informations = sortByKey(personal_informations, "first_name");

alert(JSON.stringify(sorted_personal_informations));  // sorted by "first_name"
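For comparison, the same case-insensitive sort-by-key is a one-liner in Python with sorted() and a key function (the sample records below are a trimmed-down, made-up version of the array above):

```python
def sort_by_key(records, key):
    # Sorts a list of dicts by the given key, case-insensitively,
    # mirroring the toUpperCase() comparison in the JS version.
    return sorted(records, key=lambda record: record[key].upper())

people = [{"first_name": "Bhishan"}, {"first_name": "Ankit"}, {"first_name": "aalok"}]
```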