• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 649

Download the HTML of a web page from Python, like a browser does (bypassing Reblaze) - explanation in body

I'm trying to download the HTML of a given web page using Python 2.7 on Ubuntu.
I've succeeded with most of the web pages I've tried, using several methods, such as urllib3.

BUT, I failed to download the html of:

If I open the page in my browser first, I can download the page with my code for a few minutes.
After a few minutes, I can no longer download the HTML page, and I start getting:

HTTPHeaderDict({'content-length': '616', 'expires': 'Tue, 24 Feb 2015 21:12:17 GMT', 'pagespeed': 'off', 'server': 'Reblaze Secure Web Gateway', 'connection': 'keep-alive', 'x-ua-compatible': 'IE=EmulateIE8', 'cache-control': 'private, no-cache, no-store, no-transform', 'date': 'Tue, 24 Feb 2015 21:12:17 GMT', 'x-cdn': 'Akamai', 'p3p': 'CP="IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT"', 'content-type': 'text/html; charset=utf-8'})

<html><head><meta charset="utf-8"></head><body><script src="//d1a702rd0dylue.cloudfront.net/js/iealml-03/3600.js"></script><script>window.rbzns = {}; rbzns.hosts="www.mako.co.il mako.co.il"; rbzns.ctrbg="dVa9rce47U+iuusPxpSoG2zKw2PX1p1wpNsKpeo92FVY8m3Rww27b3eDes1IrdG2XG0sBBFooJqpNad4cFnt/fwvNznkniELGLpI0nurISYw1/qvHNtj+vAKZVCEcPcWbuWz2cEkppGJoNkMl3LNK2hv5QHSCYPLt78wQnMRLmk=";rbzns.rbzreqid="rbz-mako-reblazer0531343232313035323932bdaed4e40029eed1"; winsocks(true);</script></body></html>
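A response like the one above can be recognized programmatically before deciding how to proceed. A minimal sketch, using the sample body above; the helper names and regex pattern here are assumptions for illustration, not part of any Reblaze API:

```python
import re

# Abridged copy of the Reblaze challenge body shown above.
challenge = ('<html><head><meta charset="utf-8"></head><body>'
             '<script src="//d1a702rd0dylue.cloudfront.net/js/iealml-03/3600.js"></script>'
             '<script>window.rbzns = {}; rbzns.hosts="www.mako.co.il mako.co.il"; '
             'rbzns.rbzreqid="rbz-mako-reblazer0531343232313035323932bdaed4e40029eed1"; '
             'winsocks(true);</script></body></html>')

def is_reblaze_challenge(html):
    # The challenge page is tiny and always assigns window.rbzns
    # and calls winsocks(); a real content page does neither.
    return 'rbzns' in html and 'winsocks' in html

def extract_rbzreqid(html):
    # Pull the per-request id Reblaze embeds in the challenge page.
    m = re.search(r'rbzns\.rbzreqid="([^"]+)"', html)
    return m.group(1) if m else None

print(is_reblaze_challenge(challenge))  # True
print(extract_rbzreqid(challenge))
```

The check is deliberately loose; any page that assigns `rbzns` and calls `winsocks` is treated as a challenge.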

Here is my code:

user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.99 Safari/537.36',
              'connection': 'keep-alive',
              'accept-encoding': 'gzip, deflate, sdch',
              'accept-language': 'en-US,en;q=0.8,he;q=0.6,he-IL;q=0.4',
              'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'}
http = urllib3.PoolManager(10, headers=user_agent)
r = http.request('GET', url)

How can I always download the page using only Python code?

1 Solution
Dave Baldwin (Fixer of Problems) commented:
You can't. The HTML you posted contains two JavaScript files which you can't run with Python, and it redirects from the 'cloud' provider to the actual web content.

'Akamai' is a CDN (Content Delivery Network) that delivers web content for its clients.
omer d (Author) commented:
And can I do it in code using another language?
Because I can access the page using a regular browser.
Dave Baldwin (Fixer of Problems) commented:
JavaScript is built into all browsers. I don't know of any other programming language that will also run that JavaScript.
There's a spidermonkey interface (https://pypi.python.org/pypi/python-spidermonkey) available for Python which lets you fire up a JavaScript interpreter and pass objects to it. So you actually can do what you're after. It's slightly complicated by the fact that the JavaScript they're using is obfuscated and that you don't have a DOM, but their JavaScript isn't so bad that it can't fairly easily be hacked to do what you're after: retrieve the cookie it needs to access the page.

The following is working for me to pull the ingredients:
from bs4 import BeautifulSoup
import requests
import re
import spidermonkey

class RecipeGetter(object):
    def __init__(self, url):
        self.url = url
        s = requests.Session()
        r = s.get(self.url)
        self.html = r.text
        cookies = dict()
        # A Reblaze challenge page loads its obfuscated script as 3600.js.
        if self.html.find('3600.js') != -1:
            cookies['rbzreqid'], cookies['rbzid'] = self.getRbzid(self.html)
            # Retry with the computed cookies to get the real page.
            r = s.get(self.url, cookies=cookies)
            self.html = r.text
        self.soup = BeautifulSoup(self.html)

    def getRbzid(self, page):
        rbzreqid = re.search('(rbz-mako-reblazer.*?)"', page).group(1)
        soup = BeautifulSoup(page)
        script = soup.find_all('script')[1].text
        # Qualify the bare globals so they resolve against our fake window.
        script = re.sub('(window\.)?rbzns', 'window.rbzns', script)
        script = re.sub('winsocks', 'window.winsocks', script)
        rt = spidermonkey.Runtime()
        cx = rt.new_context()
        # Minimal fake DOM: just the properties their script touches.
        window = {"document": {
                      "documentElement": {
                          "scrollLeft": ""
                      }
                  },
                  "screen": {
                      "width": 1920,
                      "height": 1080,
                      "availHeight": 1000,
                      "availWidth": 1000
                  },
                  "navigator": {
                      "userAgent": ""
                  }}
        cx.add_global("window", window)
        # 3601.js is the hacked-up copy of their script (attached below);
        # it stores the cookie string it builds in window['retval'].
        jscript = open('3601.js', 'r').read()
        jscript = jscript + script
        cx.execute(jscript)
        cookie = window['retval']
        match = re.search('rbzid=(.*?);', cookie)
        if match:
            return rbzreqid, match.group(1)
        return '', ''

    def getIngredients(self):
        for ingredient in self.soup.find_all('li', itemprop='ingredient'):
            yield ingredient.span.text

def main():
    url = 'http://www.mako.co.il/food-recipes/recipes_column-fish-seafood/Recipe-9e6645ebcd35b41006.htm'
    r = RecipeGetter(url)
    for ingredient in r.getIngredients():
        print ingredient

if __name__ == '__main__':
    main()

Also attached is a hacked-up version of the script that they're using to build the cookie your request will need. The Python script above expects it to be called 3601.js and to live in the same directory as the Python script.

Anyway, it's working for me. Installing python-spidermonkey is a little more involved than your standard python pip install, but if you're on Linux it's not that tough. Windows would be a bit tougher; you'd probably need to go with Cygwin and Cygwin's Python.
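For reference, a rough install sketch on Ubuntu; the build-dependency package names here are assumptions, so check the PyPI page linked above for the authoritative requirements:

```shell
# Build prerequisites for python-spidermonkey (names are assumptions;
# the PyPI page lists the actual requirements for your system).
sudo apt-get install build-essential python-dev libnspr4-dev pkg-config

# Then install the Python binding itself.
pip install python-spidermonkey
```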
omer d (Author) commented:
WOW, what a great answer!!!!!! thank you :)
Question has a verified solution.
