Fill Online Form Using Python


Use the mechanize module to fill an online form

By the end of this read, you will be able to fill an online form using Python. For this, I would like to introduce you to the module "mechanize", which is designed for exactly this purpose. One advantage of working with mechanize is that we need not worry about things like session handling.

Let's start with the GitHub signup form itself.

Base url of the github page

Index of the form to fill-up

The index of a form is the number of forms that appear before it in the page source. A webpage may contain several forms. If the form you want to work with comes after two other forms in the source code, then its index is 2 (two forms are before it). This is the case for the signup form on GitHub.
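To make the idea concrete, here is a small sketch (plain Python, no mechanize) that counts the form tags in page source to work out a form's index. The HTML below is invented for illustration; real GitHub markup differs.

```python
from html.parser import HTMLParser

class FormIndexFinder(HTMLParser):
    """Collect the order in which <form> tags appear in a page."""
    def __init__(self):
        super().__init__()
        self.form_ids = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.form_ids.append(dict(attrs).get("id", "<no id>"))

# a made-up page with three forms: search, sign-in, then sign-up
page_source = """
<html><body>
  <form id="search"></form>
  <form id="signin"></form>
  <form id="signup"></form>
</body></html>
"""

parser = FormIndexFinder()
parser.feed(page_source)
signup_index = parser.form_ids.index("signup")
print(signup_index)  # two forms come before it, so its index is 2
```

This is the same count mechanize's select_form(nr=...) relies on: the signup form is the third form tag encountered, hence index 2.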

Value of the name attribute for each text entry box

On each text entry box, right click, choose Inspect Element, and look for the value of the name attribute.

In the case of GitHub:

  1. For the username field

The username text entry area on GitHub's signup page is an ordinary text input. Right click and inspect the element to see the value assigned to the name attribute; that value is all we want. In this case it is "user[login]".

Similarly, for the email address field the value of the name attribute is "user[email]", and for the password field it is "user[password]".

That's all the information we need. The code below should sign up for a GitHub account.

import mechanize  # sudo pip install mechanize

username = "your_username"  # username for github
email = ""  # email for github
password = "your_password"  # password for github

br = mechanize.Browser()  # initiating a browser

br.set_handle_robots(False)  # ignore robots.txt

br.addheaders = [("User-agent", "Mozilla/5.0")]  # our identity

gitbot ="")  # requesting the github base url

br.select_form(nr=2)  # the signup form is in third position (search and sign-in forms come before it)

br["user[login]"] = username

br["user[email]"] = email

br["user[password]"] = password

sign_up = br.submit()


Some things you could try with this:

  1. Automate sign-ups (Tip: use a .txt file to store the username and email, hardcode the password in the program, and loop over the file to sign up every entry)
  2. Brute-force attacks
  3. Fetching google search results
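The bulk sign-up idea from tip 1 can be sketched as follows. The file format (one "username,email" pair per line) and the sign_up helper are my assumptions for illustration, not part of the original script; sign_up stands in for the mechanize code above.

```python
PASSWORD = "hardcoded_password"  # same password for every account, as the tip suggests

def sign_up(username, email, password):
    """Placeholder for the mechanize-based sign-up shown earlier."""
    print("signing up %s <%s>" % (username, email))

def bulk_sign_up(lines):
    """Parse username,email pairs (one per line) and sign each one up."""
    created = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        username, email = line.split(",", 1)
        sign_up(username, email, PASSWORD)
        created.append(username)
    return created

# in practice: with open("accounts.txt") as f: bulk_sign_up(f)
accounts = ["alice,", "bob,", ""]
print(bulk_sign_up(accounts))  # returns ['alice', 'bob']
```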


A caveat: exception handling is not considered. This may result in a 503 error code (service unavailable) while making numerous requests to a website. Tip: looping over different user agents may help.
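The user-agent rotation tip can be sketched with itertools.cycle; the agent strings below are just examples, and the returned list plugs straight into mechanize's addheaders.

```python
import itertools

# a few example user-agent strings to rotate through
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9)",
]

agent_pool = itertools.cycle(USER_AGENTS)

def headers_for_next_request():
    """Return addheaders for mechanize, using the next agent in the pool."""
    return [("User-agent", next(agent_pool))]

# each request presents a different identity; after 3 requests it wraps around
for _ in range(4):
    print(headers_for_next_request())
```

Before each request you would set br.addheaders = headers_for_next_request(), so consecutive requests no longer share one identity.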

Finding The Facebook Fanpage Of Startups: Selenium And Facepy Usage


As a regular activity of the Software Club at my college, we have weekly meetups where we discuss various ideas and code anything that is possible within an hour or so. We make groups of 3 or 4 and each group works on different ideas. At this point I really feel tech giants (Google, FB, etc.) should also consider colleges in Nepal and similar countries for their internship programs. Yesterday (Jan 3), we discussed several ideas and my group worked on something cool too. However, I will only talk about my portion.

In a nutshell, I extracted all the startups in Nepal and found their Facebook pages. The data was then used by other members of my group to do something cool which I can't discuss here.

Extract startups in Nepal and find FB page

from selenium import webdriver
from facepy import GraphAPI
import json
import time 

startup_fan_pages = {}

access_token = "access_token"   # get it from Facebook's Graph API Explorer

graph = GraphAPI(access_token)

startups_listing_url = "..."  # url of the startups directory (link elided in the original post)

browser = webdriver.Firefox()
browser.get(startups_listing_url)

time.sleep(40) #wait for the browser to completely load the page

startups = browser.find_elements_by_class_name("panel-title") #returns a list of objects having class="panel-title"
print("startups found")

for startup in startups:
    r =, "page", page=False, retry=3) #search the startup's name; page=False is to refuse to get a generator
    if len(r['data']) > 0:
        startup_fan_pages[r['data'][0]['name']] = str(r['data'][0]['id'])

with open('startupsinnepalfanpages.json', 'w') as fp:
    json.dump(startup_fan_pages, fp)

The site I scraped is a listing of all the startups in Nepal. I used selenium to extract all the startups from the website. To find their corresponding Facebook fan pages, I made use of facepy, which allows an easy and quick way to make queries to the Graph API. All you need is the access token, which you can get from Facebook's Graph API Explorer.

In the real implementation, the data is stored in a google spreadsheet so that it is available to the other portion of the program for further computation. If you are interested in how to push data to a spreadsheet via python, go ahead and read Grab alexa rank and write to google spreadsheet using python. Keep the comments coming, and please don't use adblockers (Adsense is the only source of this site's income); it keeps me motivated to publish good content :)


Automate Signups Using Python: All About Python's Selenium Module


Recently I've been involved in various projects with some of the coolest people from IIT Bombay, IIT Delhi, and other tech entrepreneurs from India. I am the youngest in the loop and it's great to get involved in discussions with experts and computer wizards. In today's article, I will present my part of a project which I really enjoyed doing. We will take a closer look at the selenium module in Python for automation, on the premises of one of the rare websites making automation almost impossible: DropBox. DropBox assigns dynamic ids to elements on page load, and these ids are unique on every load, which means the mechanize module can't do the job for us. Therefore, we rely on selenium. By the end of this read, you will have a deeper understanding of the selenium module and hopefully a geeky way to tackle such problems.

Python script for automating signups for DropBox


from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Firefox()
browser.get("")  # the signup page url (missing from the original snippet)

list_of_inputs = browser.find_elements_by_xpath("//div/input[starts-with(@id, 'pyxl')]")

id_for_fname = str(list_of_inputs[7].get_attribute("id"))

id_for_lname = str(list_of_inputs[8].get_attribute("id"))

id_for_email = str(list_of_inputs[9].get_attribute("id"))

id_for_password = str(list_of_inputs[10].get_attribute("id"))

id_for_agreement = str(list_of_inputs[11].get_attribute("id"))

fname = browser.find_element_by_id(id_for_fname)
fname.send_keys("First Name")

lname = browser.find_element_by_id(id_for_lname)
lname.send_keys("Last Name")

email = browser.find_element_by_id(id_for_email)
email.send_keys("")

password = browser.find_element_by_id(id_for_password)
password.send_keys("password")

agreement = browser.find_element_by_id(id_for_agreement)

password.send_keys(Keys.RETURN)


Line by line breakdown – automating signups with selenium in python

1. Line 1 and 2

These are the import statements required for our program. webdriver is used to instantiate a browser and contains methods such as find_element_by_name and many more we will use later. Next, we import Keys to send keystrokes such as RETURN to the browser.

2. Line 4 and 5

Line 4 instantiates a firefox browser. On Linux distros we do not need to pass additional parameters to webdriver.Firefox(), while on Windows it is necessary to pass the path to the browser binary. The get method on the browser object opens the given parameter, which is essentially a web url, in the browser.

3. Line 7

As mentioned earlier, dropbox has a mechanism to assign ids to elements dynamically at load, and each id is unique on every page load. If you look closely at the elements by inspecting them, you will notice that the ids start with pyxl followed by numbers. browser.find_elements_by_xpath("//div/input[starts-with(@id, 'pyxl')]") returns a list of input fields which are wrapped inside div tags and whose id starts with 'pyxl'.

4. Line 9

The list we got from line 7 contains a number of input fields; the ones we need appear in sequence starting from index 7. The 7th index in the list is the input field for first name. The .get_attribute method takes the attribute name (eg: id, name) as a parameter and returns that attribute of the current selenium element.

5. Line 11, 13, 15, 17

These are similar to the explanation in point 4; refer back. They are the input fields for last name, email, and password, plus a checkbox for the license agreement.

6. Line 19 and 20

In line 19, we've used the browser.find_element_by_id() method, passing the id we got in line 9. This selects the input element. Line 20 uses the .send_keys() method to pass text or keystrokes to the input field (text in our case).

7. Line 22, 23 & 25,26 & 28,29 & 31,32

These are similar to what is explained in point 6 above, except for lines 31 and 32. Line 31 is the same, but since the element selected is a checkbox, we apply the .click() method instead of .send_keys() to check-mark the field.

8. Line 34

As mentioned above, the .send_keys() method takes either text or keystrokes as defined in Keys, which we've imported in our program. On both Linux and Windows, the enter/return keystroke is passed via Keys.RETURN as a parameter to .send_keys(). Here we've selected the password field and called .send_keys(Keys.RETURN) on it; any input field in the form would do. It's just an easy way to get the signup form submitted, instead of looking for the submit button and clicking it.
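The starts-with filter that the XPath performs can be mimicked in plain Python, which makes it easier to see what line 7 is actually doing. The markup below is invented to imitate DropBox's dynamically assigned ids; it is not the real page source.

```python
from html.parser import HTMLParser

class PyxlInputCollector(HTMLParser):
    """Collect ids of <input> tags whose id starts with 'pyxl'."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("id", "").startswith("pyxl"):
            self.ids.append(attrs["id"])

# made-up markup imitating dynamically assigned ids
page_source = """
<div><input id="pyxl1042" name="fname"></div>
<div><input id="pyxl1043" name="lname"></div>
<div><input id="other99" name="decoy"></div>
"""

collector = PyxlInputCollector()
collector.feed(page_source)
print(collector.ids)  # only the pyxl-prefixed inputs survive
```

The prefix is stable across page loads even though the digits after it change, which is exactly why matching on the prefix works where matching on a full id would not.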

Minified version of the program

Below is a minified version of the program, dropping some statements that were intentionally added above to improve the understanding of the selenium module.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Firefox()
browser.get("")  # the signup page url (missing from the original snippet)

list_of_inputs = browser.find_elements_by_xpath("//div/input[starts-with(@id, 'pyxl')]")

list_of_inputs[7].send_keys("first name")
list_of_inputs[8].send_keys("last name")
list_of_inputs[9].send_keys("")
list_of_inputs[10].send_keys("password")
list_of_inputs[11].click()  # license agreement checkbox
list_of_inputs[10].send_keys(Keys.RETURN)  # submit the form
We've used the above program to get enough cloud space at DropBox. Hopefully I will write next on how to access the DropBox file API programmatically. If you have any questions regarding the article or the code, comment below so we can discuss. Also let me know how you felt about the article, so I can come up with fresh and useful content every week.

Sorting An Array Containing JSON Elements By A Specific Key Using JavaScript


Example of sorting an array containing JSON elements

The following function takes two parameters. The first expects an array with JSON elements in it, while the second expects a string, i.e. the key of the JSON objects on whose basis the array is to be sorted. For this example we use "first_name" as the key for sorting. We use the .toUpperCase() method so that the sort is not affected by upper and lower case letters.

We define a variable personal_informations which holds an array of JSON objects. We alert the value stored in it so that the proof of sorting can be seen clearly. We store the value returned from the sortByKey method in a variable sorted_personal_informations. Now when alerting that value, we see that the array has been sorted by the key "first_name".

function sortByKey(array, key) {
    return array.sort(function(a, b) {
        var x = a[key].toUpperCase();
        var y = b[key].toUpperCase();
        return ((x < y) ? -1 : ((x > y) ? 1 : 0));
    });
}

var personal_informations = [
    {"first_name":"Bhishan","last_name":"Bhandari","email":"","country":"Nepal","phone_number":"9849060230"},
    {"first_name":"Ankit","last_name":"Pradhan","email":"","country":"Nepal","phone_number":"9999999999"},
    {"first_name":"Aalok","last_name":"Koirala","email":"","country":"Nepal","phone_number":"8888888888"},
    {"first_name":"Subigya","last_name":"Nepal","email":"","country":"USA","phone_number":"6666666666"}
];
alert(JSON.stringify(personal_informations));

var sorted_personal_informations = sortByKey(personal_informations, "first_name");
alert(JSON.stringify(sorted_personal_informations));
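For readers who prefer Python, the same case-insensitive sort by a key is a one-liner with sorted(). The data below mirrors the JavaScript example (trimmed to the relevant fields).

```python
def sort_by_key(array, key):
    """Sort a list of dicts by the given key, ignoring letter case."""
    return sorted(array, key=lambda item: item[key].upper())

personal_informations = [
    {"first_name": "Bhishan", "last_name": "Bhandari"},
    {"first_name": "Ankit", "last_name": "Pradhan"},
    {"first_name": "Aalok", "last_name": "Koirala"},
    {"first_name": "Subigya", "last_name": "Nepal"},
]

for person in sort_by_key(personal_informations, "first_name"):
    print(person["first_name"])  # prints Aalok, Ankit, Bhishan, Subigya
```

One difference worth noting: JavaScript's Array.sort mutates the array in place, while Python's sorted() returns a new list and leaves the original untouched.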


All You Need To Know To Build Applications With Phonegap


Warm greetings to my readers and a sincere apology for not being able to publish an article last week. The reason: the biggest app camp in Nepal, followed by college and internship. However, these are just excuses.

Getting into the details of this article: NCELL APP CAMP is about building a product for mobile devices that scales and solves a major problem of the country or the world, with options to compete in the following categories:

1. Health

2. Tourism

3. Games & Entertainment

4. Utilities

We (a group of 5 students) decided to make an application to disseminate information and save the lives of expectant mothers. Idea selection is now complete, with the top 150 ideas selected across the various categories. We made it through the top 150 (38 selected in the Health category) and needed to build an MVP for the second round. You may watch the demo of the application below.

We had around 2 months for the development. We are students and we procrastinate whenever possible. Suddenly we had one week remaining until the submission of the MVP, with nothing done yet. Saying one week isn't fair either, once you subtract college time + internship time + assignments. Excuses again :p . Anyway, we had already divided the tasks. Two of my friends were to design the database and make the API. Two others were to write about the product, cost estimation, etc. (they required us to write a lot). I was to work on the mobile development. It was too late to produce a full-fledged application, so we opted to make a prototype of the final product, with no API. The product we had was just a prototype with no back-end service. We didn't make it to the top 26. Procrastination pays well :)

I started with phonegap and jquery mobile. It's easier to build multi-platform applications with phonegap and jquery mobile than it is to brew coffee. I mean it. Let me tell you, it's mostly a bunch of patterns like the following:


// bind a click handler by class or by id
$(".class-name").on('click', function(e){
    // ...
});

$("#id-name").on('click', function(e){
    // ...
});

// replace the html content of an element
$("#id-name").html("some html content");

// plain functions
function nameOfFunction(params){
    // ...
}

// navigate between jquery mobile pages
window.location.hash = "#id-name";

// persist small pieces of data
window.localStorage.setItem('key', 'value');

// talk to a server
$.ajax({
    type: "GET", // or "POST"
    url: "http://someapi",
    data: data,
    success: function(data){
        // ...
    },
    error: function(e){
        // ...
    }
});

While you can make a good working application with the above set of outlines, most of the time you need to utilize the API of the OS, for example to schedule a notification or send an SMS. Don't be surprised when I tell you it's even easier to have those features in your application than it is to go to a barber and get your hair trimmed.

All you need is a good plugin to talk to the OS's API to perform things like sending an SMS from the application, scheduling a local notification, etc. Comparably, plugins are a set of barbers with specializations in different hair styles: all you need to do is invoke a method they provide at the appropriate place, just like going to the barber who knows how to trim your hair in a specific style.

Below is the link to the github repository which contains the code for the application prototype I built.

I hope you enjoyed the article. Comment below to say how you felt about it or to share your views. I will come up with a new article next week. Till then, happy coding :) Goodbye!

Using To Extract Data Easily: Example Usage Of The API


Hey guys, in this post I will explain one of the most reliable and easy to use extraction API providers, an open source tool named

What is is an open source, web-based data extraction platform which enables general users to access data from websites without writing any code. also serves an API for extracting the necessary data, which makes it very useful for programmers needing dynamic data from certain websites or data sources. With, data from websites can be queried easily, either manually or through the API. Using the service, the whole web can be treated as a big database for machines and applications to read essential data from, which would definitely be more cumbersome to do manually.

There can be numerous cases where a website does not provide an API for data extraction. In cases like this, coding tactics to extract the right piece of information is possible but not very easy. Here is a good fit, because it provides an API to extract the dataset more easily.

Example use of’s API to extract data

1. Open up your browser and go to SignUp there, it's free.

2. For this example, we are using a webpage containing the dataset we want.

3. Type the url into the input placeholder and press the Get Data button.


4. Now we get the dataset we were looking for. At the bottom right corner is a button, Get API. Press it to get the API for extracting the dataset, then press "copy this to my data".


5. You can now query the dataset in various formats including JSON and TSV. Click on the Get API tab to see the structure of the url for querying the dataset.


6. At the end of the url is the parameter _apikey=YOUR API KEY. When using the url in your application, you need to replace YOUR API KEY with the alphanumeric api key provided by You can view this key by clicking on the parameter and typing the password for your account.

Using API in code

Below is the piece of code I am using in my application to get the dynamic data from


function getCalenderInfoFromServer(calenderYear){
    $.ajax({
        url: baseUrlDefault + calenderYear + baseUrlEnd + baseAPI,
        type: 'GET',
        dataType: 'json',
        success: function (data) {
            // use the returned dataset here
        },
        error: function (e) {
            // handle the error here
        }
    });
}
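The same kind of query url, with the _apikey parameter appended at the end, can be built in Python with the standard library. The endpoint and parameter names below are hypothetical stand-ins; copy the real endpoint and key from the Get API tab of your own extractor.

```python
from urllib.parse import urlencode

# hypothetical stand-ins: replace with the endpoint and key from the Get API tab
BASE_URL = ""
API_KEY = "YOUR_API_KEY"

def build_query_url(page_url):
    """Build the extractor query url, appending the _apikey parameter last."""
    params = {"input/webpage/url": page_url, "_apikey": API_KEY}
    return BASE_URL + "?" + urlencode(params)

print(build_query_url(""))
```

Letting urlencode do the assembly keeps the target url and the key properly percent-encoded, which hand-concatenated strings often get wrong.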


Thanks for reading :)

How To Set Up And Remotely Access A Linux Machine With OpenSSH Via putty.exe


Hello readers. In this post, I will explain how to set up the essentials for accessing a device remotely from another device on some other network. Follow along :)

To better understand the tutorial, read the background behind writing this article here

How to reserve an ip for your device?

I will explain how to reserve an ip for a device via the router's settings. My router's model is TL-WR740N, brand TP-Link; however, the concept and procedure are similar for all routers. First, go to your router's settings page ( in my case).

Now provide valid credentials. For most routers, the default username and password are both admin.

Click on the DHCP option, present in the left-hand menu in the case of TP-Link.

Now go ahead and click on Address Reservation

Click Add New to reserve an ip address for a device. You will be required to enter the MAC address of the device.

Open up a terminal with the key combination Ctrl + Alt + T on your linux machine and enter the following command.

ifconfig

Since I connect via Wi-Fi, I look for HWaddr under wlan0.
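If you'd rather grab the MAC address programmatically than read it out of ifconfig, Python's uuid module can format it for you. A caveat as an assumption here: on machines with several interfaces, uuid.getnode() may not return the wlan0 address specifically.

```python
import uuid

def mac_address():
    """Return this machine's MAC address formatted as aa:bb:cc:dd:ee:ff."""
    node = uuid.getnode()  # 48-bit hardware address as an integer
    return ":".join("%02x" % ((node >> shift) & 0xff)
                    for shift in range(40, -8, -8))

print(mac_address())
```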

Go ahead and paste this MAC address into the address reservation menu.

After this, enter a desired ip address, in my case Check enable and save the settings.

Once done, you will be required to reboot the router to save these settings. That’s all for a static local ip.

Setting up openssh-server and port Forwarding

It is dead simple to set up an openssh server on a linux machine. Follow along.

Enter the following command to get openssh-server

sudo apt-get install openssh-server

Now, to start the ssh service, run the following command

sudo service ssh start

The default port for ssh is 22

The next step is to forward incoming traffic on port 22 of the router's public ip to the local ip of the linux device; in my case to

For this, we again go to the router’s settings. Open up browser and go to or something similar to open the router’s settings.

Find the menu named Forwarding or something similar and click it. Under the sub-menu, click on Virtual Servers.

Once done click Add New

Fill the details :

Service Port : 22

Internal Port : 22

IP Address: (the local ip of the machine with openssh running)

Protocol: All (TCP/UDP)

Status: Enabled

Go ahead and save it.

What have we done so far? Assigned a static ip to a device, installed openssh-server on the linux machine and ran it, and forwarded incoming traffic on port 22 of the router to the local ip of the machine.

One more thing. Depending on your ISP, the public ip you surf the internet with may change, but it is static most of the time. In my case the ip barely changes, so all I have to do is note down the public ip, which is what I did.

How to access a device remotely via ssh from a Windows machine

Prerequisite: the machine running openssh-server must be powered on; however, it is not necessary to be logged in. Also, openssh-server runs by default once the device is powered on.

Download putty.exe from here. PuTTY is a telnet and ssh client for Windows. It is very small and you need not install it; it's a download-and-run program.

Once downloaded, run putty.exe. Enter the public ip of your home network, specify port 22, check ssh, then press Open. This opens a terminal. If the connection is established, meaning both the home network and the network you are on are connected to the internet, you will be prompted for the login credentials of your remote device. Providing the correct credentials gives you command-line access to the device on your home network.

Search for a file and send an email with attachment remotely via ssh and some python

Now that you have remote access to your linux machine, you can use ls and related commands to find the location of the file you are looking for.

In my case, it’s present inside /home/bhishan/lispcognitive.odt

To send this file via email, we will use the python interactive shell.

Run the following command to open the python shell

python
We will be using two standard modules in python, namely smtplib and email, to send an email with an attachment.

Below is the code I used to send the email, formatted in a readable way.


import smtplib
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import formatdate

me = ""
you = ""
password = "password"

msg = MIMEMultipart()
msg["From"] = me
msg["To"] = you
msg["Date"] = formatdate(localtime=True)
msg["Subject"] = "Sending email with attachment via python"

msg.attach(MIMEText("Please find the attachment below."))

with open("/home/bhishan/lispcognitive.odt", "rb") as f:
    attachment = MIMEApplication(, Name="lispcognitive.odt")
attachment["Content-Disposition"] = 'attachment; filename="lispcognitive.odt"'

s = smtplib.SMTP_SSL("")
s.login(me, password)
s.sendmail(me, [you], msg.as_string())

If you have any questions regarding the post or the codes, comment below, so we can discuss.

Mining google search results using python – Making an SEO kit with python



By the end of this read, you will be able to make a fully functional SEO kit where you can feed the keywords associated with a certain url, and the program will show the position of that url in google search. If you know tools like, our program will be similar to that tool.

I was writing for a trekking based website and I had to make sure my writeup appeared in a good position in google search. The keywords being competitive, I had to keep a day to day report of the article's status. Therefore I decided to write a simple program that would find my article's position in google search. Continue Reading

Mining Facebook – Mining the Social Web using python


Mining data from different sources has become a trend in the past few years. The need for structured and characteristic data has led data miners to point their machines at social data. Twitter, Google+ and Facebook are some of the social networks now serving as mountains of behavioral data. In this post Facebook will be our Zen source and python will be the miner. The Python programming language is well known for its capability to withdraw web data effectively and efficiently. Cutting the long story short, today we will use the Facebook Graph API through python to mine the facebook page likes for each individual in our friend list. Continue Reading