App launch: 2014 elections forecast


With more than 1,100 candidates, 507 races, top-line campaign finance data and poll averages for select races, the 2014 midterm elections forecast app we launched in early September is probably the most complex mash-up of data, APIs and home-grown content built yet by our Interactive Applications team at Gannett Digital.

We’re happy with the results — even more so because the app is live not only on USA TODAY’s desktop and mobile websites but across Gannett. With the rollout of a company-wide web framework this year, we’re able to publish simultaneously to sites ranging from the Indianapolis Star to my alma mater, The Poughkeepsie Journal.

What’s in the forecast? Every U.S. House and Senate race plus the 36 gubernatorial races up in November with bios, photos, total receipts and current poll averages. For each race, USA TODAY’s politics team weighed in on a forecast for how it will likely swing in November. Check out the Iowa Senate for an example of a race detail page.

Finally, depending on whether you open the app with a desktop, tablet or phone, you’ll get a version specifically designed for that device. Mobile-first was our guiding principle.

Building the backend

This was a complex project with heavy lifts both on design/development and data/backend coding. As usual, I handled the data/server side for our team with assists from Sarah Frostenson.

As source data, I used three APIs plus home-grown content:

— The Project Vote Smart API supplies all the candidate names, party affiliations and professional, educational and political experience. Most of the photos are via Vote Smart, though we supplemented where missing.

— The Sunlight Foundation’s Realtime Influence Explorer API supplies total receipts for House and Senate candidates via the Federal Election Commission.

— From Real Clear Politics, we’re fetching polling averages and projections for the House (USAT’s politics team is providing governor and Senate projections).

The route from APIs to the JSON files that fuel the Backbone.js-powered app goes something like this:

  1. Python scrapers fetch data into Postgres, running on an Amazon EC2 Linux box.
  2. A basic Django app lets the USAT politics team write race summaries, projections and other text. Postgres is the DB here also.
  3. Python scripts query Postgres and spit out the JSON files, combining all the data for various views (a stripped-down sketch of this step follows the list).
  4. We upload those files to a cached file server, so we’re never dynamically hitting a database.
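If you’re curious about step 3, here’s a stripped-down sketch of the idea — query Postgres, build a list of dicts, dump to JSON. I’m using psycopg2 here for illustration, and the connection details, table and column names are invented; the real scripts combine several queries per view:

# Stripped-down sketch: query Postgres, write a flat JSON file
# (connection details, table and columns are invented for illustration)

import json
import psycopg2

conn = psycopg2.connect(host='localhost', dbname='elections',
                        user='username', password='secret')
cur = conn.cursor()

cur.execute("""
    SELECT office, state, district, candidate, party
    FROM races
    ORDER BY state, district;
""")

races = []
for office, state, district, candidate, party in cur.fetchall():
    races.append({
        'office': office,
        'state': state,
        'district': district,
        'candidate': candidate,
        'party': party
    })

with open('races.json', 'w') as f:
    f.write(json.dumps(races))

cur.close()
conn.close()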

Meanwhile, at the front

Front-end work was a mix of data-viz and app framework lifting. For the maps and balance-of-power bars, Maureen Linke (now at AP) and Amanda Kirby used D3.js. Getting data viz to work well across mobile and desktop is a chore, and Amanda in particular spent a chunk of time getting the polling and campaign finance bar charts to flex well across platforms.

For the app itself, Jon Dang and Rob Berthold — working from a design by Kristin DeRamus — used Backbone.js for URL routing and views. Rob also wrote a custom search tool to let readers quickly find candidates. Everything then was loaded into a basic template in our company CMS.

This one featured a lot of moving parts, and anyone who’s done elections knows there are always edge cases that make life interesting. In the end, though, I’m proud of what we pulled off — and really happy to serve readers valuable info to help them decide at the polls in November.

NICAR ’14: Getting Started With Python

For a hands-on intro to Python at IRE’s 2014 NICAR conference, I put together a Github repo with code snippets just for beginners.

Find it here: https://github.com/anthonydb/python-get-started

For more Python snippets I’ve found useful, see:
https://github.com/anthonydb/python-snippets

Finally, if you’d like an even deeper dive, check out journalist-coder Tom Meagher’s repository for the Python mini bootcamp held at this year’s conference.

Thanks to everyone who showed up!

Setting up Python in Windows 8.1

One of my family members recently acquired a Windows 8.1 laptop, and I was curious as to whether Python setup was as easy as when I wrote about installing it on Windows 7. Turns out, it is — and not much different. Which could spawn a whole conversation about Windows OS development, but that’s for another day …

Here’s your quick guide, modified from my earlier Win 7 post:

Set up Python on Windows 8.1

1. Visit the official Python download page and grab the Windows installer. Choose the 32-bit version. A 64-bit version is available, but there are compatibility issues with some modules you may want to install later.

Note: Python currently exists in two versions, the older 2.x series and newer 3.x series (for a discussion of the differences, see this). This tutorial focuses on the 2.x series.

2. Run the installer and accept all the default settings, including the “C:\Python27” directory it creates.

Continue…

FTP with Python: Quick file stashing

Recently at work, our team[1] built an app to track Atlantic and Pacific tropical storms during hurricane season. To keep the map current — and avoid manual processing — we run a Python script at regular intervals to check shapefiles at the National Hurricane Center and convert them to GeoJSON. After conversion, we push the GeoJSON to our cached storage environment. Using flat files keeps things fast and tidy.
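There’s more than one way to handle that conversion step. One common approach — sketched here with placeholder file names, and not necessarily what our production script does — is to shell out to GDAL’s ogr2ogr:

# Rough sketch: convert a downloaded shapefile to GeoJSON by
# shelling out to GDAL's ogr2ogr (file names are placeholders)

import subprocess

subprocess.check_call([
    'ogr2ogr',
    '-f', 'GeoJSON',
    'forecast_track.json',  # output GeoJSON
    'forecast_track.shp'    # input shapefile
])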

To publish the files, we ship via FTP. A colleague told me about Python’s built-in FTP support, and I was really happy with how easy it was to write a script to handle the task.

Here’s a sample script that logs into an FTP server and uploads all files in a local directory. Grab it, along with other useful bits, at my python-snippets repository on GitHub.

# Store all files in a local directory to a server
# using FTP
 
from ftplib import FTP
import os
 
# ftp settings
settings = {
    'ftp': {
        'url': 'ftp.some-server.com',
        'username': 'your-account-name',
        'password': 'your-password',
        'remote-directory': '/path/to/files'
    }
}
 
# local paths
paths = {
   'local-directory': 'my-files/'
}
 
# list of local files
files = os.listdir(paths['local-directory'])
 
# connect once, then upload each file
ftp = FTP(settings['ftp']['url'])
ftp.login(settings['ftp']['username'], settings['ftp']['password'])
ftp.cwd(settings['ftp']['remote-directory'])

for f in files:
    with open(paths['local-directory'] + f, 'rb') as local_file:
        ftp.storbinary('STOR ' + f, local_file)

ftp.quit()

[1] Python hackery by Andrea Fuller, mapping by Juan Thomassie, design/production direction by Kristin DeRamus and Maureen Linke. I rode devops/producer this time around.

Python: Pull Twitter, Facebook User Data

Updated July 4, 2013: The Twitter portion of this post has been revised to reflect Twitter’s retirement of v1 of its API and its move to v1.1. Since writing the original, I also discovered the excellent python-twitter library, which is extremely useful both for handling the now-required authentication and for providing a wrapper around the API itself.

Original post (updated):

The APIs from the two social media giants, Twitter and Facebook, offer plenty of possibilities for data gathering and analysis. From tweets and status messages to numbers of followers and friends, photos, locations and more, there’s a lot of information waiting.

Given my nascent interest in Python, I decided to explore the APIs via some simple scripts, fetching Twitter profile and Facebook page data and writing the values to a SQLite database.

These examples are simple but offer a framework for you (and me) to build upon. SQLite support is built into Python, but for the Facebook script you must install the Requests library if you don’t have it.

Facebook Page Data

This script (available on Github) pulls the number of “likes” and “talking about this” for each Facebook page specified in the list called names_list. It creates a SQLite database called social_data.db if none exists and also creates a table to hold the data.

# Fetch Facebook page metrics via Social Graph API into a SQLite DB
# Grabs the number of likes and "talking about" numbers
 
import requests
import sqlite3
import os
from datetime import datetime
 
# These are the accounts for which you will fetch data
names_list = [
    'fallingskies',
    'usatoday'
]
 
# API base URL
base_url = 'https://graph.facebook.com/'
 
# Function to add row to accounts table
def insert_db(handle, likes, talking):
    conn = sqlite3.connect('social_data.db')
    cur = conn.cursor()
    cur.execute('''
        INSERT INTO fbaccounts VALUES (?,?,?,?);
        ''', (datetime.now(), handle, likes, talking))
    conn.commit()
    conn.close()
 
# Create the database if it doesn't exist
if not os.path.exists('social_data.db'):
    conn = sqlite3.connect('social_data.db')
    conn.close()
else:
    pass
 
# Create the table if it's not in the db
conn = sqlite3.connect('social_data.db')
cur = conn.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS fbaccounts 
    (FetchDate Date, Handle Text, Likes Integer, Talking Integer)
    ''')
conn.commit()
conn.close()
 
# Iterate over handles and hit the API with each
for user in names_list:
    url = base_url + user 
    print 'Fetching ' + user
    response = requests.get(url)
    profile = response.json()
    handle = profile['name']
    likes = profile['likes']
    talking = profile['talking_about_count']
    insert_db(handle, likes, talking)

Twitter profile data

This script (also on Github) uses the python-twitter library to fetch some basic profile data — screen name, followers and description — into a SQLite DB. To get the keys and access tokens now required by version 1.1 of the Twitter API, you’ll need to register an application under your profile. Start at https://dev.twitter.com/
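A bare-bones version of such a fetch might look something like this — the keys are placeholders you’d fill in from your registered app, and the screen names are just examples:

# Bare-bones sketch: fetch Twitter profile data with python-twitter
# (keys and screen names are placeholders)

import twitter

api = twitter.Api(consumer_key='your-consumer-key',
                  consumer_secret='your-consumer-secret',
                  access_token_key='your-access-token',
                  access_token_secret='your-access-token-secret')

for name in ['usatoday', 'nytimes']:
    user = api.GetUser(screen_name=name)
    print(user.screen_name)
    print(user.followers_count)
    print(user.description)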
Continue…

Get JSON from Excel using Python, xlrd

Powering interactive news applications off flat files rather than a call to a database server is an option worth considering. Cutting a production database and data access layer out of the mix eliminates a whole slice of complexity and trims development time. Flat files aren’t right for every situation, but for small apps they’re often all you need.

These days, most of the apps I help build at Gannett Digital consume JSON. Simpler apps — such as the table/modal displays we deployed in February for our Oscar Scorecard and Princeton Review Best Value Colleges — run off one or two JSON files. The more complex — e.g., last year’s Ghost Factories: Poison in the Ground — run off hundreds of them. Updating content is as easy as generating new JSON and uploading it to our file server.

I wrote last year how to use Python to generate JSON files from a SQL database. Lately, I’ve been using Python to make JSON out of Excel spreadsheets.

The key ingredient is the Python library xlrd. It’s part of a suite of Excel-related tools available from www.python-excel.org, including the xlwt library to create Excel files.

(Another choice is openpyxl, which has similar features and works with newer .xlsx formatted Excel files. I’ve used both with equal success. Take your pick.)
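For comparison, here’s a quick sketch of reading a sheet with openpyxl — the file name is a placeholder, and the exact calls vary a bit by openpyxl version:

# Quick sketch: reading an .xlsx file with openpyxl
# (file name is a placeholder; calls vary a bit by version)

from openpyxl import load_workbook

wb = load_workbook('excel-openpyxl-sample.xlsx')
sh = wb.active  # the first worksheet

for row in sh.iter_rows():
    print([cell.value for cell in row])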

Basic xlrd operations

Let’s say we have an Excel workbook containing a small table repeated over three worksheets. The table in each case looks like this:

(The sample table, repeated on each sheet, has one row per car with columns for car ID, make, model and miles.)

Here are some snippets of code — just scratching the surface — to interact with it programmatically:

# Demonstrates basic xlrd functions for working with Excel files
# (Excel 97-2003)
 
import xlrd
 
# Open the workbook
wb = xlrd.open_workbook('excel-xlrd-sample.xls')
 
# Print the sheet names
print wb.sheet_names()
 
# Get the first sheet either by index or by name
sh = wb.sheet_by_index(0)
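# ... or by name (the sheet name here is just an example):
# sh = wb.sheet_by_name('Sheet1')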
 
# Iterate through rows, returning each as a list that you can index:
for rownum in range(sh.nrows):
    print sh.row_values(rownum)
 
# If you just want the first column:
first_column = sh.col_values(0)
print first_column
 
# Index individual cells:
cell_c4 = sh.cell(3, 2).value
# Or you can use:
#cell_c4 = sh.cell(rowx=3, colx=2).value
print cell_c4
 
# Let's say you want the same cell from x identical sheets in a workbook:
x = 2
while x >= 0:
    sh = wb.sheet_by_index(x)
    cell_x = sh.cell(2, 3).value
    print cell_x
    x = x - 1

From Excel to JSON

Pretty cool stuff. Now, let’s convert our sample spreadsheet to JSON. I’ll borrow some of the techniques I discussed when outlining how to use Python to build JSON from a SQL database:

import xlrd
from collections import OrderedDict
import simplejson as json
 
# Open the workbook and select the first worksheet
wb = xlrd.open_workbook('excel-xlrd-sample.xls')
sh = wb.sheet_by_index(0)
 
# List to hold dictionaries
cars_list = []
 
# Iterate through each row in worksheet and fetch values into dict
for rownum in range(1, sh.nrows):
    cars = OrderedDict()
    row_values = sh.row_values(rownum)
    cars['car-id'] = row_values[0]
    cars['make'] = row_values[1]
    cars['model'] = row_values[2]
    cars['miles'] = row_values[3]
 
    cars_list.append(cars)
 
# Serialize the list of dicts to JSON
j = json.dumps(cars_list)
 
# Write to file
with open('data.json', 'w') as f:
    f.write(j)

Here’s the breakdown: We open the workbook, select the sheet and iterate through the available rows (which xlrd conveniently counts via its nrows attribute).

Add each cell to a key/value pair in a dictionary, then add each dictionary to a list. Dump the list to JSON and write to a file.

Of course, a spreadsheet this simple doesn’t need a Python script to make its way to JSON. Just use Mr. Data Converter for something like this. But as soon as your JSON requirements gain complexity, the ability to use Python to nest nodes, build strings and transform data on the fly makes this approach very appealing.

Catch the Cat: A Simple Python Game

New cat in the house? Check.

Son interested in programming? Check.

Dad eager to learn more Python? Check.

Add it up and you get a little project the two of us worked up over a couple of nights this summer — a Python-based text game called Catch the Cat. You can download the code from its repository on Github. To use it, from a command prompt run:
 

python catchthecat.py

The game’s simple: The cat’s on the loose in the house, and you have to catch it. Every time you move to a new room, the cat moves too — at random. To catch the cat, you both have to arrive in the same place.
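Stripped of the classes and the leaderboard, the core loop boils down to something like this — the rooms and messages here are invented for the sketch, so don’t take it as the game’s actual code:

# Stripped-down sketch of the catch-the-cat loop (Python 2)
# (rooms and messages are invented; the real game uses classes
# and keeps scores in SQLite)

import random

rooms = ['kitchen', 'living room', 'bedroom', 'bathroom', 'basement']
cat = random.choice(rooms)
moves = 0

while True:
    room = raw_input('Which room will you search? ')
    moves += 1
    if room == cat:
        print('You caught the cat in ' + str(moves) + ' moves!')
        break
    print('No cat here. You hear it scamper off ...')
    cat = random.choice(rooms)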

OK, so we’re not going to unseat EA Sports here, but we met our goals: we had fun and learned some tricks. While I was using the exercise to help my son work through the logic choices required in game play, I got to add a few things to my Python toolkit:

— Using Python’s SQLite integration to build a leaderboard database (a bare-bones sketch follows this list).

— Writing classes for the cat and player objects.
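For the leaderboard piece, the sqlite3 pattern takes only a few lines. Here’s a bare-bones sketch — the table layout and values are invented for illustration, not pulled from the game:

# Bare-bones sketch of a SQLite leaderboard
# (table layout and values are invented for illustration)

import sqlite3

conn = sqlite3.connect('leaderboard.db')
cur = conn.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS scores
    (Player Text, Moves Integer)''')
cur.execute('INSERT INTO scores VALUES (?, ?)', ('player1', 7))
conn.commit()

for row in cur.execute('SELECT Player, Moves FROM scores ORDER BY Moves'):
    print(row)

conn.close()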

Have you written Python-based games? Suggestions for improvement here?

12 Tangents Later, I Publish a Django Site

Last week, I deployed my first live Django app. Time from start to finish: three years.

Cue the sound of snickers and a thousand eye-rolls. Go ahead. But I confess: From the moment I said, “I want to build something using Django” to the moment I restarted Apache on my WebFaction server and watched the site load for real in my browser, 36 months passed through the hourglass of time.

You see, I got diverted along the way. I’ll tell you why. But first, two things:

1. Learning is wonderful, thrilling, maddening and rewarding. If you’re a journalist and want to see new worlds, let me encourage you to take a journey into code.

2. The site is right here and the code is here. It falls way short in the Awesome Dept., and it will not save journalism. But that’s not why I built it, really.

* * *

The tale began March 2009 in Indianapolis at the Investigative Reporters and Editors Computer-Assisted Reporting conference. That’s the annual data journalism hoedown that draws investigative journalists, app coders and academics for a couple of days of nerdish talk about finding and telling stories with data.
Continue…

Generate JSON From SQL Using Python

Let’s say you want to generate a few hundred — or even a thousand — flat JSON files from a SQL database. Maybe you want to power an interactive graphic but have neither the time nor the desire to spin up a server to dynamically generate the data. Or you think a server adds one more piece of unnecessary complexity and administrative headache. So, you want flat files, each one small for quick loading. And a lot of them.

A few lines of Python is all you need.

I’ve gone this route lately for a few data-driven interactives at USA TODAY, creating JSON files out of large data sets living in SQL Server. Python works well for this, with its JSON encoder/decoder offering a flexible set of tools for converting Python objects to JSON.

Here’s a brief tutorial:

1. If you haven’t already, install Python. Here’s my guide to setup on Windows 7; if you’re on Linux or Mac you should have it already.

2. In your Python script, import a database connector. This example uses pyodbc, which supports connections to SQL Server, MySQL, Microsoft Access and other databases. If you’re using PostgreSQL, try psycopg2.

3. Create a table or tables to query in your SQL database and write and test your query. In this example, I have a table called Students that has a few fields for each student. The query is simple:
 

SELECT ID, FirstName, LastName, Street, City, ST, Zip
FROM Students
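Before we get to the full script in step 4, here’s a minimal sketch of steps 2 and 3 in action — connecting with pyodbc and running that query. The driver, server and database names are placeholders:

# Minimal sketch: connect to SQL Server with pyodbc and run the query
# (driver, server and database names are placeholders)

import pyodbc

conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;'
                      'DATABASE=school;Trusted_Connection=yes')
cur = conn.cursor()

cur.execute("""
    SELECT ID, FirstName, LastName, Street, City, ST, Zip
    FROM Students;
""")

for row in cur.fetchall():
    print(row.FirstName + ' ' + row.LastName)

conn.close()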

4. Here’s an example script that generates two JSON files from that query. One file contains JSON row arrays, and the other JSON key-value objects. Below, we’ll walk through it step-by-step.
Continue…

‘Goshen’ WordPress Theme on Github

At the start of 2011, I simplified the first WordPress theme I’d built for this site and turned it into something far more minimalistic. I went from two sidebars to one, lost the bulky header and turned from color to black and white. Part of this was a desire for simplicity; part was my reaction to my lack of design sense. Color is not my strong suit, and I shouldn’t be caught trying to pretend.

Since then, I’ve made a few tweaks, but one thing I hadn’t done all year was post the theme — which I call Goshen — for anyone to use. Today I fixed that and pushed the files up to their own repository on Github. You can download the files and hack away. (In your WordPress install, under /wp-content/themes/, create a folder called Goshen and unzip the files there; then you can activate the theme via the dashboard.)

I’ll continue to tweak when I have time. I can’t say enough about how much WordPress theme hacking has taught me about HTML, CSS, templates and web design. If you want to start from scratch, I recommend this excellent tutorial. You’ll discover that WordPress themes have only a few moving parts. Mastering them will let you make your site exactly what you want it to be.

 

Scraping CDC flu data with Python

Getting my flu shot this week reminded me about the weekly surveillance data the Centers for Disease Control and Prevention provides on flu prevalence across the nation. I’d been planning to do some Python training for my team at work, so it seemed like a natural fit to write a quick Python scraper that grabs the main table on the site and turns it into a delimited text file.

So I did, and I’m sharing. You can grab the code for the CDC-flu-scraper on Github.

The code uses the Mechanize and BeautifulSoup modules for web browsing and HTML parsing, respectively. Much of what I demonstrate here I started learning via Ben Welsh’s fine tutorial on web scraping.
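The general pattern is short enough to sketch here. This isn’t the actual CDC scraper — the URL and parsing below are placeholders — just the Mechanize-plus-BeautifulSoup skeleton:

# Rough sketch of the pattern: mechanize fetches the page,
# BeautifulSoup parses the table (the URL is a placeholder;
# see the Github repo for the real CDC scraper)

import mechanize
from BeautifulSoup import BeautifulSoup

br = mechanize.Browser()
br.set_handle_robots(False)
html = br.open('http://www.example.com/flu-report').read()

soup = BeautifulSoup(html)
table = soup.find('table')

for row in table.findAll('tr'):
    cells = [''.join(cell.findAll(text=True)).strip()
             for cell in row.findAll(['th', 'td'])]
    print('|'.join(cells))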

We’re still early in flu season, but if you watch this data each week you’ll see the activity pick up quickly.

Update 10/22/2011: Ben Welsh has contributed to this scraper, adding JSON output and turning it into a function. Benefits of social coding 101 …

Setting up Python in Windows 7

An all-wise journalist once told me that “everything is easier in Linux,” and after working with it for a few years I’d have to agree — especially when it comes to software setup for data journalism. But …

Many newsroom types spend the day in Windows without the option of Ubuntu or another Linux OS. I’ve been planning some training around Python soon, so I compiled this quick setup guide as a reference. I hope you find it helpful.

Set up Python on Windows 7

Get started:

1. Visit the official Python download page and grab the Windows installer. Choose the 32-bit version. A 64-bit version is available, but there are compatibility issues with some modules you may want to install later. (Thanks to commenters for pointing this out.)

Note: Python currently exists in two versions, the older 2.x series and newer 3.x series (for a discussion of the differences, see this). This tutorial focuses on the 2.x series.

2. Run the installer and accept all the default settings, including the “C:\Python27” directory it creates.

Continue…

csvkit: A Swiss Army Knife for Comma-Delimited Files

If you’ve ever stared into the abyss of a big, uncooperative comma-delimited text file, it won’t take long to appreciate the value and potential of csvkit.

csvkit is a Python-based Swiss Army knife of utilities for dealing with, as its documentation says, “the king of tabular file formats.” It lets you examine, fix, slice, transform and otherwise master text-based data files (and not only the comma-delimited variety its name implies, but tab-delimited and fixed-width as well). Christopher Groskopf, lead developer on the Knight News Challenge-winning Panda project and recently a member of the Chicago Tribune’s news apps team, is the primary coder and architect, but the code’s hosted on Github and has a growing list of contributors.

As of version 0.3.0, csvkit comprises 11 utilities. The documentation describes them well, so rather than rehash it, here are highlights of three of the utilities I found interesting during a recent test drive:
Continue…

Some Favorite WordPress Plugins

With the 100-degree heat broiling the East Coast this weekend, I decided to stay inside and make some design and performance tweaks to my site. I added Google +1 buttons to posts and the index page, and I also tweaked some of the settings in my plugins.

Speaking of those, here’s what I’ve been using to make life easier:

Akismet: Gets rid of a ton of comment spam for various Russian “services” so I can spend my time doing other things. You’ll need to sign up for an API key, but otherwise it’s simple and effective.

Contact Form 7: After trying a few contact plugins, I settled on Contact Form 7 and have had great results. It powers my Contacts page, which I prefer to use instead of posting an email address. For spam filtering, I implemented the quiz feature, but the plugin also supports CAPTCHA. I rarely get spam.

Google XML Sitemaps: Generates a sitemap.xml file that Google and other search engines use to index the site. Lets me include or exclude content and control how often to update the file.
Continue…

Speed Up Your WordPress Site

WordPress is a great CMS, but if you’re hosting your own install you’ll want to pay attention to keeping its performance nimble. Here are a few tips I picked up this week after reading this post and thinking about how to apply those ideas to my own site:

1. Optimize the database.

WordPress handily saves every revision of every page and post you create. That’s great for retrieving a lost paragraph, but it can add hundreds of rows to the wp_posts table — especially if, like me, you save revisions often. Eventually, the database grows big enough to affect query performance, which can slow page loads.

There’s a remedy: Delete those unnecessary rows and optimize your tables.

The WordPress plugin directory lists several database optimizers, but they make me nervous. They all claim to optimize but don’t say exactly what changes they make. Instead, I dug straight into my site’s MySQL database using my web host’s phpMyAdmin interface.

Sanity check: Back up your database first.

Once that’s done, run this in the SQL pane:

DELETE FROM wp_posts WHERE post_type = "revision";

Next, use phpMyAdmin to optimize your tables (Structure Pane > Select All > choose Optimize Table from the drop down).

This trick shrank my database from 5.5MB to less than 1MB.

2. Install a caching plugin.

If something I write here goes viral, I’d like to avoid watching my server fry. One way to head this off is to set up caching so that every page load isn’t executing a query on your server.

Again, there is no shortage of caching plug-ins for WordPress. After checking comments on my web host’s user forums, I went with W3 Total Cache. Not only does it provide caching for pages, feeds and other content, it will minify CSS, HTML and JavaScript.

It loads like any WordPress plugin, but it has many options. This tutorial gives great step-by-step directions.

3. Remove unneeded PHP calls

Most WordPress themes, including the one I built for my site, include calls designed to fetch info from the MySQL database. The idea is to make the theme easily customizable via the admin and portable to millions of users.

But unless you plan to change your site’s name, tagline or url often, there’s no reason to have your theme fetch those every time it loads your header.

For example, in my header.php file, I changed this:

<h1><a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></h1>

To this:

<h1><a href="http://www.anthonydebarros.com">Anthony DeBarros</a></h1>

That’s two fewer calls each time that page loads. Depending on your theme, you might be able to find many more calls to eliminate.

More reading:

There’s a lot more you can tweak under the hood, and that means plenty of chances to learn more about server configuration. Here are some of the places where I picked up these tricks and where you can find a lot more detail. Happy optimizing!

http://yoast.com/speed-up-wordpress/
http://www.ardamis.com/2010/08/03/how-to-speed-up-wordpress/
http://wpgarage.com/tips/38-ways-to-optimize-and-speed-up-your-wordpress-blog/
http://www.gadgetcage.com/5-best-wordpress-plugins-to-improve-the-loading-speed-of-a-blog/8038/
http://mashable.com/2010/07/19/speed-up-wordpress/

USA TODAY launches public APIs

Here’s some news I’ve been itching to share for a few months: the launch of USA TODAY’s first public data APIs.

Announced today, the first two out of the gate offer access to our Best-Selling Books list and our archive of pro sports player salaries. Both data sets represent many years of effort by the database team at USA TODAY. To build these APIs, we were fortunate to team up with some great folks in our IT group. It’s been a great learning experience. We’re proud of the data and excited about opening it to the public.

Our new developer site has the details. You can register for API keys, read the documentation and post questions in forums. We want your feedback — it will help us make the APIs better and steer us as we make more of our data available in the months ahead. Also be sure to read the terms of use.

About our first two APIs:

The Best-Selling Books API contains the weekly top 150 books from our list, including metadata about each title.

The Sports Salaries API contains annual salary information for players in Major League Baseball, the National Football League, the National Basketball Association and the National Hockey League.

For more information, check out our official press release. There’s also a page up at programmableweb.com.

WordPress: Separate Comments, Trackbacks

I’ve been tinkering again with my site’s WordPress theme and wanted to address one of its weaknesses — the formatting of comments. In addition to adding Gravatars, I wanted to separate reader comments from automatically generated pingback/trackback posts.

A quick Google search turned up just the tutorial I needed from the design blog Hongkiat.com:

How to Separate Comments and Trackbacks [Wordpress Tips].

Calculating Medians With SQL

Given that median is such a valuable statistical measure, it’s baffling that Microsoft’s SQL Server and other relational databases (MySQL, PostgreSQL) don’t have a built-in MEDIAN function. Well, this week, after working through a data set in SQL Server — and deciding I didn’t want to push the data into SPSS to find medians — I hit the web to find a T-SQL workaround.

I found a ton of solutions (some from people with no clue about the difference between median and average), but the one below — adapted from a post by Adam Machanic at sqlblog.com — was the best. It produces accurate results and is fairly speedy to boot.

Here’s an example. Consider this table with student grades from two courses:

ID  Class    FirstName  Grade
1   Math     Bob        65
2   Math     Joe        72
3   Math     Sally      95
4   Science  Bob        65
5   Science  Joe        81
6   Science  Sally      81
7   Science  Mike       72

We’d like to find the median grade in each class. Here’s the script:
Continue…

Download My WordPress Theme

Update Jan. 1, 2011:

Starting today, I’m using a new theme I’ve crafted, which I am calling Goshen. It’s designed to be minimal but readable. The theme I was using previously, Portfolio_AD, is going to rest comfortably on a beach somewhere and probably won’t be coming out of retirement any time soon. You can still download it — I know of one person who actually did implement it on a site, much to my shock — and play to your heart’s content.

Original post:

I make no claim to having any design sense, but if you like the WordPress theme I made for my site, you’re welcome to it. Grab it right here.

Portfolio AD is a one-column theme with a two-column sidebar. Both columns in the sidebar can display widgets. The header is a modification of the well-worn Hemingway theme, and the rest is my take-off of a site built via this tutorial from WPDesigner.com. It works fine with WordPress 3.0.

What you see here is the result of constant tinkering. I coded up a theme of my own to learn more about what WordPress can do, and in the process I have learned tons about CSS, HTML, web hosting and content management systems in general. If you have the time, it’s well worth the investment.

Use the theme as a starting point for your own tinkering and have fun!


You Can’t Afford to Stop Learning

The late Jerry McBride, founder of the Marist College information systems program and mentor to many of us in it, gave our capstone class a bit of advice that I’ve always remembered:

“Never stop learning.”

Plenty of people, regardless of industry, argue otherwise. Play it safe, take few risks and stay with the tried and true — that keeps the bills paid and the lights on.

Decades ago, “No one ever got fired for buying IBM” was a phrase you could bank on. IBM had it all — the kings of the air-conditioned mainframe, making oodles of money, and very snug in their white-shirt-and-tie ways. But they were slow to learn. PCs came along, the mainframe business withered, OS/2 failed to unseat Windows, and tens of thousands of people in IBM hometowns found themselves unemployed. IBM’s come back, but it’s nothing like the company it was in 1980.

Stop learning, rest on the existing models, and it’s easy to become a mainframe-hawker in a PC revolution. Or a railroad tycoon watching with disregard as Henry Ford mass produces Model T’s.

I’ve practiced Jerry McBride’s advice better at times than others. Lately, very much so. I’m on a learning jag. My latest quest is Ubuntu, Apache, PostgreSQL and Django. Last night, this little screen brought a smile or two.

P.S. My bookmarks and RSS feeds prove one thing: There’s no excuse for not learning; the Internet is the best free library you’ll ever find.