Generate JSON From SQL Using Python

Let’s say you want to generate a few hundred — or even a thousand — flat JSON files from a SQL database. Maybe you want to power an interactive graphic but have neither the time nor the desire to spin up a server to dynamically generate the data. Or you think a server adds one more piece of unnecessary complexity and administrative headache. So, you want flat files, each one small for quick loading. And a lot of them.

A few lines of Python is all you need.

I’ve gone this route lately for a few data-driven interactives at USA TODAY, creating JSON files out of large data sets living in SQL Server. Python works well for this, with its JSON encoder/decoder offering a flexible set of tools for converting Python objects to JSON.

Here’s a brief tutorial:

1. If you haven’t already, install Python. Here’s my guide to setup on Windows 7; if you’re on Linux or Mac you should have it already.

2. In your Python script, import a database connector. This example uses pyodbc, which supports connections to SQL Server, MySQL, Microsoft Access and other databases. If you’re using PostgreSQL, try psycopg2.

3. Create the table or tables you’ll query in your SQL database, then write and test your query. In this example, I have a table called Students with a few fields for each student. The query is simple:
 

SELECT ID, FirstName, LastName, Street, City, ST, Zip
FROM Students

4. Here’s an example script that generates two JSON files from that query: one containing JSON row arrays, the other JSON key-value objects. Below, we’ll walk through it step by step.

 

import pyodbc
import json
import collections
 
connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
conn = pyodbc.connect(connstr)
cursor = conn.cursor()
 
cursor.execute("""
            SELECT ID, FirstName, LastName, Street, City, ST, Zip
            FROM Students
            """)
 
rows = cursor.fetchall()
 
# Convert query to row arrays
 
rowarray_list = []
for row in rows:
    t = (row.ID, row.FirstName, row.LastName, row.Street, 
         row.City, row.ST, row.Zip)
    rowarray_list.append(t)
 
j = json.dumps(rowarray_list)
rowarrays_file = 'student_rowarrays.js'
f = open(rowarrays_file, 'w')
print >> f, j
f.close()
 
# Convert query to objects of key-value pairs
 
objects_list = []
for row in rows:
    d = collections.OrderedDict()
    d['id'] = row.ID
    d['FirstName'] = row.FirstName
    d['LastName'] = row.LastName
    d['Street'] = row.Street
    d['City'] = row.City
    d['ST'] = row.ST
    d['Zip'] = row.Zip
    objects_list.append(d)
 
j = json.dumps(objects_list)
objects_file = 'student_objects.js'
f = open(objects_file, 'w')
print >> f, j
f.close()
 
conn.close()

 
Let’s break this down. After our import statements, we set a connection string for the server. Then, we use pyodbc to open the connection and execute the query:
 

connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
conn = pyodbc.connect(connstr)
cursor = conn.cursor()
 
cursor.execute("""
            SELECT ID, FirstName, LastName, Street, City, ST, Zip
            FROM Students
            """)
 
rows = cursor.fetchall()

The script loads the query results into a list object called rows, which we can iterate through to do any number of things. In this case, we’ll build JSON.

At the top of the file, the script imports Python’s json module, which translates Python objects to JSON and vice versa. Python lists and tuples become arrays, while dictionaries become objects with key-value pairs.
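
For example, a quick check in the interpreter shows the mapping (a minimal sketch; the values are made up):

import json

print json.dumps([1, "two", 3.0])            # list becomes an array
print json.dumps((1, "two"))                 # tuple also becomes an array
print json.dumps({"FirstName": "Samantha"})  # dict becomes an object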

In the first example, the script builds a list of tuples, with each row in the database becoming one tuple. Then, the json module’s “dumps” method is used to serialize the list of tuples to JSON, and we write to a file:
 

rowarray_list = []
for row in rows:
    t = (row.ID, row.FirstName, row.LastName, row.Street, 
         row.City, row.ST, row.Zip)
    rowarray_list.append(t)
 
j = json.dumps(rowarray_list)
rowarrays_file = 'student_rowarrays.js'
f = open(rowarrays_file, 'w')
print >> f, j
f.close()

The JSON result looks like this, with each tuple rendered as a JSON array:
 

[
    [
        1,
        "Samantha",
        "Baker",
        "9 Main St.",
        "Hyde Park",
        "NY",
        "12538"
    ],
    [
        2,
        "Mark",
        "Salomon",
        "12 Destination Blvd.",
        "Highland",
        "NY",
        "12528"
    ]
]

That validates nicely over at JSONLint.com.
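
If you’d rather validate locally, Python’s built-in json.tool module (available since version 2.6) will pretty-print a file and complain if the JSON is invalid:

$ python -m json.tool < student_rowarrays.js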

As a second example, the script next builds a list of dictionaries, with each row in the database becoming one dictionary and each field in the row a key-value pair:
 

objects_list = []
for row in rows:
    d = collections.OrderedDict()
    d['id'] = row.ID
    d['FirstName'] = row.FirstName
    d['LastName'] = row.LastName
    d['Street'] = row.Street
    d['City'] = row.City
    d['ST'] = row.ST
    d['Zip'] = row.Zip
    objects_list.append(d)
 
j = json.dumps(objects_list)
objects_file = 'student_objects.js'
f = open(objects_file, 'w')
print >> f, j
f.close()

You’ll note that I’m using Python’s OrderedDict() object from its collections module in place of a regular dictionary. While this is not necessary, I like to use it to force the order of dictionary keys, which makes the JSON more readable for fact-checking. Be sure to import collections at the beginning of your script.
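
Here’s a quick illustration of the difference (a minimal sketch):

import collections
import json

regular = {'id': 1, 'FirstName': 'Samantha', 'LastName': 'Baker'}
ordered = collections.OrderedDict(
    [('id', 1), ('FirstName', 'Samantha'), ('LastName', 'Baker')])

print json.dumps(regular)  # key order is arbitrary in Python 2
print json.dumps(ordered)  # keys serialize in insertion order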

Here’s the resulting JSON, an array of valid objects:
 

[
    {
        "id": 1,
        "FirstName": "Samantha",
        "LastName": "Baker",
        "Street": "9 Main St.",
        "City": "Hyde Park",
        "ST": "NY",
        "Zip": "12538"
    },
    {
        "id": 2,
        "FirstName": "Mark",
        "LastName": "Salomon",
        "Street": "12 Destination Blvd.",
        "City": "Highland",
        "ST": "NY",
        "Zip": "12528"
    }
]

That’s it.

Using these simple building blocks, you can now construct basic or complex JSON output for one or one gazillion files. For example:

— Use pyodbc to query a list of students, then iterate over that list to output a single file for each student. This is handy if you have a lot of data on each student and want to keep each JSON file small for quick loading.
— Add nested objects by executing queries for related records, building them into dictionaries and appending them to the output (see the sketch below).
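
Here’s a rough sketch combining both ideas: one small file per student, each carrying a nested list of related records. The Grades table and its columns are hypothetical stand-ins; substitute your own schema:

import pyodbc
import json
import collections

connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
conn = pyodbc.connect(connstr)
cursor = conn.cursor()

cursor.execute("SELECT ID, FirstName, LastName FROM Students")
students = cursor.fetchall()

for student in students:
    d = collections.OrderedDict()
    d['id'] = student.ID
    d['FirstName'] = student.FirstName
    d['LastName'] = student.LastName

    # Nested records: query the (hypothetical) Grades table for this student
    cursor.execute("SELECT Course, Grade FROM Grades WHERE StudentID = ?",
                   student.ID)
    d['grades'] = [collections.OrderedDict([('Course', g.Course),
                                            ('Grade', g.Grade)])
                   for g in cursor.fetchall()]

    # One small file per student, named by ID
    f = open('student_%s.js' % student.ID, 'w')
    print >> f, json.dumps(d)
    f.close()

conn.close()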

Very handy, and it runs quickly to boot.

48 Responses to “Generate JSON From SQL Using Python”

  1. tobi says:

    awesome, exactly what i needed

  2. James says:

    Anthony, I also wanted to chime in to say thanks for the great article.

  3. Anthony says:

    James, you’re very welcome! Glad this was useful.

  4. john says:

    Hi, I have a problem, every time i try to adapt your code (with rows received from sqlite) i get this error:

    AttributeError: 'tuple' object has no attribute 'Nr'

    Can you help? thanks.

  5. Anthony says:

    I’d be glad to take a look. Can you post your code here in this thread?

  6. john says:

    sure, here it is:

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    
    
    import json
    import collections
    import sqlite3 as lite
    import sys
    
    
    con = lite.connect('evidenta.db')
    
    #with con:    
        
    cursor = con.cursor()    
    cursor.execute("SELECT nr,nume,prenume,data_nasterii FROM persoane")
    rows = cursor.fetchall()
    
    
    rowarray_list = []
    for row in rows:
        t = (row.nr, row.nume, row.prenume, row.data_nasterii)
        rowarray_list.append(t)
     
    j = json.dumps(rowarray_list)
    rowarrays_file = 'student_rowarrays.js'
    f = open(rowarrays_file,'w')
    print >> f, j

    thanks

  7. Anthony says:

    I believe the issue is that Python’s SQLite implementation references fields within a row object differently than pyodbc does. Instead of by name, you would call by index:

    t = (row[0], row[1], row[2], row[3])

    Depending on the field type, you may need to convert to string in order to serialize as JSON. i.e. str(row[0])

  8. john says:

    thank you. it seems to work without converting to string. Student_rowarrays.js was printed and it has json format. There was no other output to console so i suppose that is ok.

    I would have one more question if you have the time:

    To pass this information on to JavaScript, do I simply reference this file (db.py) so it runs automatically? I tried this but it doesn’t work. Could you help? Thanks.

    jQuery("#gridevidenta").jqGrid({
       	url:'db.py',
        datatype: "json",
        colNames:['Nr','Nume', 'Prenume', 'Data Nasterii'],
  9. Anthony says:

    John,

    You won’t be able to execute the Python script directly from jQuery. Instead, you’ll need to load the flat file the script creates.

    When I work on an interactive application that requires continual updates, I just set up a cron job to run the script as often as needed to generate and publish a new flat file.
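
    For example, a crontab entry like this (the paths and script name are placeholders) would regenerate the files every 15 minutes:

    */15 * * * * /usr/bin/python /path/to/generate_json.py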

  10. john says:

    thanks for all the help, i did place the flat file in the url string. it still doesn’t load anything.

    Either way, thanks for all the help. I’ll go dig a little bit deeper. I did hope I could call the script from jquery and get the info like that….

    Thanks for all the help. I’ll go read tutorials :P

  11. Jerry says:

    When your post came through my RSS feed I made a mental note of it. I’m also wrestling with output from a MySQL database. Not trying to steal your thunder here, but I wanted to tip you off to an AWK script that does essentially the same thing via command-line piping.

    See https://gist.github.com/3481798

    The data that I’m usually munging are the output of simple SELECT statements, but require table joins, roll-ups, and so forth. I’ll save the queries in scripts, incrementally tweaking them, and calling them from the command line to test results. An SQL script can generate a CSV file as follows:

    $ mysql -e "source myscript.sql" |sed 's/\t/,/g'

    I then wrote a script to convert CSV to JSON, using the column headers as field tags, but then iterated to take MySQL output directly:

    $ mysql -e "source myscript.sql" |awk -F "\t" -f tab2json.awk

    One caveat is that the enclosing array brackets of the JSON records are omitted, but these are easy enough to add after the fact.

  12. Anthony says:

    Jerry,

    Thanks for sharing that! I wasn’t familiar with AWK scripting before you mentioned it, so add that to the long list of things I need to try.

    One advantage I see to your approach is the de-coupling of the JSON serialization from the SQL script itself. That makes it reusable in a lot of situations.

    Sometimes, I wind up needing to create JSON to a spec given me by front-end developers, and the requirements include nested values. In those cases, I wind up using Python to make multiple queries and return rows as needed. I suspect you could write a script to do the same, just calling your AWK script as needed and piecing the JSON together.

  13. Jerry says:

    Everyone has their own workflow for the ETL steps. If I’m working with a MySQL database, I tend to push the E & T steps into the SQL — especially when these involve lots of aggregation. The net result is that I get data in a simple, tabular format, making the L step easy. The script that I wrote fits into this workflow. If the required JSON output involved nested objects or lots of arrays, however, I would probably go with a heavy-duty solution like Python.

    As a language, Awk is old-old school: it is absent from GitHub’s drop-down menu for publishing gists. It’s a stream-processing language, and it can work magic when transforming data on the Unix command line, especially when combined with other utilities. (Check out Mike Loukides’ “Data Hand Tools.”) I can highly recommend O’Reilly’s “sed & awk” for more about the Awk programming language.

  14. Gery says:

    great post, one question, I’m getting this error message:

    >>> for row in rows:
    ... t = (row.id, row.fid, row.longitude, row.latitude, 
             row.w_depth_m, row.station, row.type, row.survey, 
             row.source, row.max_pen_m, row.core_rec_m, 
             row.t_g_ckm1, row.h_f_mWm2, row.comments, row.geom)
    ... rowarray_list.append(t)
      File "", line 3
        rowarray_list.append(t)
                    ^
    SyntaxError: invalid syntax

    do you know where could be the problem? thanks

  15. Anthony says:

    Gery,

    Without seeing your code in an editor, hard to say. But it looks like a possible indent problem. Make sure you’re indenting the second and third lines of your tuple.

  16. Gery says:

    thanks Anthony for the quick answer, here it’s the code:

    **********************************************************

    #!/usr/bin/python
    
    print "Content-type: text/html"
    print
    print ""
    print ""
    print ""
    
    import psycopg2
    import simplejson as json
    import collections
    
    conn = psycopg2.connect("dbname='mop' user='postgres' host='localhost' password='pass'")
    cur = conn.cursor()
    
    cur.execute("""
             SELECT id, fid, longitude, latitude, w_depth_m, station, 
             type, survey, source, max_pen_m, core_rec_m, t_g_ckm1, 
             h_f_mWm2, comments, st_asgeojson(geom) FROM boreholes_point_wgs84
             """)
    
    rows = cur.fetchall()
    print rows
    
    # Convert query to row arrays
    rowarray_list = []
    for row in rows:
        t = (row.id, row.fid, row.longitude, row.latitude, 
             row.w_depth_m, row.station, row.type, row.survey, 
             row.source, row.max_pen_m, row.core_rec_m, 
             row.t_g_ckm1, row.h_f_mWm2, row.comments, row.geom)
        rowarray_list.append(t)
    
    j = json.dumps(rowarray_list)
    rowarrays_file = 'boreholes_rowarrays.js'
    f = open(rowarrays_file,'w')
    print >> f, j
     
    # Convert query to objects of key-value pairs
    objects_list = []
    for row in rows:
        d = collections.OrderedDict()
        d['id'] = row.id
        d['fid'] = row.fid
        d['longitude'] = row.longitude
        d['latitude'] = row.latitude
        d['water depth [m]'] = row.w_depth_m
        d['station'] = row.station
        d['type'] = row.type
        d['survey'] = row.survey
        d['source'] = row.source
        d['maximum penetration [m]'] = row.max_pen_m
        d['core recovery [m]'] = row.core_rec_m
        d['thermal gradient [C/km]'] = row.t_g_ckm1
        d['heat flow [mWm2]'] = row.h_f_mWm2
        d['comments'] = row.comments
        d['geom'] = row.geom
        objects_list.append(d)
     
    j = json.dumps(objects_list)
    objects_file = 'boreholes_objects.js'
    f = open(objects_file,'w')
    print >> f, j
     
    conn.close()
    print ""

    **********************************************************

    the last row of the output rows from “print rows” is as follows:

    (51, 51, -80.583500000000001, -9.1120999999999999, -5086.8999999999996, '1230E', 'Research', 'Leg 201', 'ODP', '36', '35.49', '34.3', '28', 'The borehole was drilled in order to explore the distribution, activities, community structure, phylogenetic affinities, and global biogeochemical consequences of microbial communities buried in deep-sea sediments including the Peru coastal margin and Peru Basin.', '{"type":"Point","coordinates":[-80.583500000000001,-9.1121,-5086.899999999999636]}')]

    when I use the code, line by line, in python I get:

    **********************************************************

    Python 2.4.3 (#1, Jun 11 2009, 14:09:58) 
    [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import psycopg2
    >>> import simplejson as json
    >>> import collections
    >>> conn = psycopg2.connect("dbname='mop' user='postgres' host='localhost' password='pass'")
    >>> cur = conn.cursor()
    >>> cur.execute("""
    ...          SELECT id, fid, longitude, latitude, w_depth_m, station, type, survey, source, max_pen_m, core_rec_m, t_g_ckm1, h_f_mWm2, comments, st_asgeojson(geom) FROM boreholes_point_wgs84
    ...          """)
    >>> rows = cur.fetchall()
    >>> rowarray_list = []
    >>> for row in rows:
    ...     t = (row.id, row.fid, row.longitude, row.latitude, row.w_depth_m, row.station, row.type, row.survey, row.source, row.max_pen_m, row.core_rec_m, row.t_g_ckm1, row.h_f_mWm2, row.comments, row.geom)
    ...     rowarray_list.append(t)
    ... j = json.dumps(rowarray_list)
      File "", line 4
        j = json.dumps(rowarray_list)
        ^
    SyntaxError: invalid syntax
    >>> 

    **********************************************************

    if I run the script in my terminal I get:

    **********************************************************
    …, (51, 51, -80.583500000000001, -9.1120999999999999, -5086.8999999999996, '1230E', 'Research', 'Leg 201', 'ODP', '36', '35.49', '34.3', '28', 'The borehole was drilled in order to explore the distribution, activities, community structure, phylogenetic affinities, and global biogeochemical consequences of microbial communities buried in deep-sea sediments including the Peru coastal margin and Peru Basin.', '{"type":"Point","coordinates":[-80.583500000000001,-9.1121,-5086.899999999999636]}')]
    Traceback (most recent call last):
    File "test4.py", line 26, in ?
    t = (row.id, row.fid, row.longitude, row.latitude, row.w_depth_m, row.station, row.type, row.survey, row.source, row.max_pen_m, row.core_rec_m, row.t_g_ckm1, row.h_f_mWm2, row.comments, row.geom)
    AttributeError: 'tuple' object has no attribute 'id'
    **********************************************************

    what could be wrong? I’m running python2.4 in rhel (32-bits), thanks for your support =)

  17. Anthony says:

    Gery,

    Your last error — ‘tuple’ object has no attribute ‘id’ — has to do with the fact that psycopg2 works a little differently than pyodbc, which I use in my example post. I assume you’re using PostgreSQL as your backend?

    psycopg2 returns each row in your table as a tuple. Therefore, you would address each column in your row object by calling row[0], row[1], row[2], etc.

    So, try changing your statement to:

    t = (row[0], row[1], row[2], row[3] ... etc.)
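
    If you’d rather keep name-based access, psycopg2 also offers cursor factories; a sketch using its extras module:

    import psycopg2.extras

    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    # Rows from this cursor can be addressed as row[0] or row['id']
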
  18. Gery says:

    cool Anthony, that solved the problem, you’re right, I’m using PostgreSQL as backend, I got now the first js file, now I get this error:

    Traceback (most recent call last):
      File "test4.py", line 37, in ?
        d = collections.OrderedDict()
    AttributeError: 'module' object has no attribute 'OrderedDict'

    I think it’s because the psycopg2 right? how could I fix this one?

    Thanks,

  19. Anthony says:

    Gery,

    That error is because OrderedDict was introduced in Python 2.7. You’re using 2.4 so it won’t work there. You should upgrade!

    http://docs.python.org/library/collections.html#collections.OrderedDict

  20. Gery says:

    thanks Anthony, time to upgrade =)

  21. Gery says:

    now it works well, thanks Anthony, you’re a heroe (y)

  22. Sergy says:

    Very nice! thank you very much for this example

  23. UnivCity says:

    Thanks for this example, Anthony. It’s clear and works well.

    I’m interested in creating rowarrays.js and objects.js flat json files for 7 queries.

    I am currently achieving this by using your example for the first query, then doing the conn.close(), and then opening a 2nd connection for the 2nd query, then again conn.close(), opening a 3rd connection for the 3rd query, so on and so forth – all in the one .py script.

    Is this the most effective way to iterate through multiple queries, or would something like multiple cursors be more effective?

    I wonder as you mention at the start of your example: Let’s say you want to generate a few hundred — or even a thousand — flat JSON files from a SQL database….

    Thanks for any tips or guidance

  24. Anthony says:

    UnivCity,

    The answer to your question, as usually happens with programming, is “it depends.”

    Usually when I generate multiple JSON files, I am using the same query over and over but limiting the query each time to a subset of the data via a WHERE clause. I create a list of values for the where clause and then iterate through the values.

    For example:

    connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
    conn = pyodbc.connect(connstr)
    cursor = conn.cursor()
    
    student_ids = [1, 19, 73, 221]
    
    for id in student_ids:
        cursor.execute(""" SELECT Name FROM students WHERE ID = ? """, id )
        rows = cursor.fetchall()
    
        # Build the JSON here and write one file per student (sketch):
        j = json.dumps([tuple(row) for row in rows])
        f = open('student_%s.js' % id, 'w')
        print >> f, j
        f.close()
    
    conn.close()

    In complex situations, I’ll use several for loops to build nested elements as well.

    Hope that helps.

  25. Moodlemaster says:

    This is very useful post, thanks for this, it is definitely helpful to me,

    thanks again

  26. thornberryp says:

    Thanks for this post, very helpful for an in house project I’m working on.

  27. Anthony says:

    Glad to help!

  28. Azmath says:

    Thanks very much for your great tutorial,
    I have one request How do I get that JSON file to load in google visualization API. I just want to know how to call that.
    Cheers

  29. Anthony says:

    Sorry, Azmath, can’t help you there.

  30. utkarsh says:

    Hi Antony ,
    Right now i am getting json like this

    [{......}, {.....},{......}]

    Instead i wish to get it like this

    {"customers_details":[{......}, {.....},{......}]}

    All rows should get added to array of customer_details object.
    How can i achieve this ?
    Please Help

  31. Anthony says:

    Utkarsh,

    You can build JSON any way you’d like as long as you use the correct data structures. In the case of what you want, it’s a list as the value of a dictionary where the key is “customers_details”. So, build your list and add it to a dictionary.
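
    For example, a minimal sketch, where objects_list is the list built in the post:

    output = {'customers_details': objects_list}
    j = json.dumps(output)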

  32. utkarsh says:

    Thanks Anthony ..that worked
    I created a map like row_map = {}
    appended all records to list row_list.append(record)
    finally added this list as value to key 'customer_details'
    row_map['customer_details'] = row_list

    I got one more query ..
    i want to update JSON not to overwrite it….
    becoz data might be coming from multiple database.
    how will i update only new entries and keep old one as it is.

    Thanks again

  33. Anthony says:

    Utkarsh,

    Hmm … that is a more structural question. When you say “new entries,” you could mean new customers or new information about an existing customer. In short, you’d need to iterate through your dictionaries and lists to find whether a record exists, then update, insert or remove as needed. But I wonder whether that would be best handled on the database side. I realize you have data coming from multiple DBs, so that may not be practical.

    The scope of the question is a little beyond what I can offer here, unfortunately!

  34. David says:

    Great article, I already have a few tables wired up ! Thanks ! One question I had is a recommendation about the “datetime” values and how to deal with them. I had a similar problem with “Decimals” and got around that by setting up my SQL view to CAST my decimals as Floats. Is there a similar trick I can do for my datetime values to avoid the following message:
    “TypeError: datetime.datetime(2012, 8, 11, 0, 0) is not JSON serializable” Thanks again !

  35. Anthony says:

    David,

    If you want, for example, to have your dates appear in JSON as strings formatted in MM/DD/YYYY format, you can try:

    import datetime

    py_date = datetime.datetime(2012, 8, 11)
    d['date'] = py_date.strftime('%m/%d/%Y')
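
    Another option is to give json.dumps a default function to call for anything it can’t serialize on its own; str() works as a blunt catch-all:

    j = json.dumps(objects_list, default=str)
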
  36. David says:

    Thanks for the prompt reply. I ended up finding a solution on the SQL Server side that others might find helpful:

    In my view, I had a datetime called LastReading that I changed to:
    CONVERT (nvarchar(30), LastReading, 126)

    this puts the field into a format like this: 2013-01-28T03:23:00 which I’ll be able to use for my issue.

    Thanks again and keep up the great work !

  37. Paul says:

    Is there a way to make what you provided applicable for use in the d3 treemap with the parent/children hierarchy?

  38. Anthony says:

    Paul,

    Potentially. I assume you’re referring to the JSON spec here:
    http://bl.ocks.org/mbostock/4063582

    It would really be a matter of querying your database to produce the Python lists and dictionaries that you want to convert, connecting those elements, and then passing them through json.dumps()

    Think of it as a string-building exercise with recursion.
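
    For example, assuming a self-referencing table with ID, ParentID and Name columns (hypothetical names), a recursive build might look like:

    def build_node(cursor, node_id, name):
        node = {'name': name}
        cursor.execute("SELECT ID, Name FROM Nodes WHERE ParentID = ?",
                       node_id)
        children = cursor.fetchall()
        if children:
            node['children'] = [build_node(cursor, c.ID, c.Name)
                                for c in children]
        return node

    j = json.dumps(build_node(cursor, 0, 'root'))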

  39. Ganesh says:

    Hi,
    I am new here, and I need to generate a JSON file from MSSQL. Can anyone help? I installed Python 2.7 on my Windows machine.

  40. dexter says:

    The code works, but it’s too verbose. This is a cleaner approach: http://stackoverflow.com/questions/12270679/how-to-use-column-names-when-creating-json-object-python and nowadays it’s recommended to use mysql.connector.

  41. Anthony says:

    Dexter,

    Thanks for the tip!

  42. rathi says:

    Hi

    I tried the above code in a Python terminal window with MySQL, and I made the corresponding changes and imported the libraries. But I still get the error:

    'module' object has no attribute 'dumps'

  43. Anthony says:

    Rathi,

    Did you import the json library?

  44. Rahul says:

    Hi Anthony,

    I am trying to work on an autosuggest box, just like in Google. For that I would need my database in JSON format. Can you tell me how to go forward without affecting my search queries, which are handled by jQuery and Ajax?

    Program:

    #!"C:\Python27\python.exe"
    print "Content-Type : text/html"
    print """
    
    """
    
    import time
    start_time=time.time()
    import cgi
    import cgitb; cgitb.enable()
    import MySQLdb
    db=MySQLdb.connect('127.0.0.1','root','','search')
    cursor=db.cursor()
    form=cgi.FieldStorage()
    message=form.getvalue("Title"," ")
    
    sql="SELECT word FROM `englishwords` WHERE `word` LIKE "+"'"+str(message)+"%'"
    cursor.execute(sql)
    data=cursor.fetchall()
    
    
    
    for dat in data:
        print """
    
             %s
    
             """ %str(dat[0])
    
    print time.time()-start_time,"seconds"
    print """
    
    """
    -This is code for database connection which contains queries.
    
    #!"C:\Python27\python.exe"
    print "Content-Type : text/html"
    print """
    
    
    
    		function ajaxFunction()
    		{
    		  document.getElementById("myDiv").innerHTML=document.getElementById('Title').value;
    		var xmlhttp;
    		if (window.XMLHttpRequest)
    		  {// code for IE7+, Firefox, Chrome, Opera, Safari
    		  xmlhttp=new XMLHttpRequest();
    		  }
    		else
    		  {// code for IE6, IE5
    		  xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
    		  }
    		xmlhttp.onreadystatechange=function()
    		  {
    		  if (xmlhttp.readyState==4 && xmlhttp.status==200)
    		    {
    		    document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
    		    }
    		  }
    		var search = document.getElementById('Title').value;
    		var queryString = "?Title="+search;
    		xmlhttp.open("GET","search1.py"+queryString,true);
    		xmlhttp.send();
    		
    		}
    		
    
    
    
    
    
        
         
    Click  
      """
    

    – This is for the ajax calls.

  45. Rahul says:

    And also, should the “student_rowarrays.js” file be created by us? If yes, should it be blank?

  46. SiteKickr says:

    Thanks! I built on your code above to produce a dynamic version that will automatically handle the fields in the table:

    http://www.sitekickr.com/snippets/zope-python/json-serialize-result-set-object
