
Getting Started with the WordPress Competition


Hey everyone,

I hope you've had a chance to take a look at the WordPress competition! It's a really neat problem, asking you to predict which blog posts people will like based on which posts they've liked in the past, and it carries a $20,000 purse. I've literally lost sleep over this.

The WordPress data is a little bit tricky to work with, however, so to help you get up and running, in this tutorial I'll show and explain the python code I used to make the "favorite blogs + LDA" benchmark. Feel free to use the code as a starting point for your own submissions! The major code snippets used in this tutorial are excerpted and slightly modified from code available here.

There are two main challenges of working with the WordPress data set. First of all, the data files are very large, totaling 6.1 GB uncompressed. Unless you have exceptional hardware at your disposal, you'll need to be careful to choose an algorithm that is both computationally feasible and memory-friendly. The code I'll show you below was run on one core of my laptop over about 6 hours.

The second challenge is that the blog posts you're given have very few features you can immediately use in a model. Aside from these few features, (like the blog the post comes from, its author, or when it was posted), you'll need to generate the rest of them yourself, a natural language processing (NLP) problem. The code I'll show uses a Latent Dirichlet Allocation (LDA) model to estimate which "topics" a post is about.

The decisions I've made in my benchmark code were guided by these two considerations. Before getting into the details, I'll describe what the benchmark code does. In broad outline, the benchmark code estimates which posts a user will like by first considering posts from blogs they have liked before. Since there are usually more than 100 such posts, (and we are required to recommend 100 posts), we choose the 100 that are semantically the most similar to the posts the user has liked before and come from the blogs the user has liked the most.

And now for the details: First of all, since the posts are given to us in json, and the post content is given in html, we need some functions for parsing these formats. I found a nice html-stripper here:

from HTMLParser import HTMLParser

# Tools for stripping html
class MLStripper(HTMLParser):
    def __init__(self):
        self.reset()
        self.fed = []
    def handle_data(self, d):
        self.fed.append(d)
    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()

You can use the html stripper like this:

>>> strip_tags('<h1>foo <a href="www.bar.com">baz</a></h1>')
'foo baz'

Python comes pre-equipped with a great json parser, which is as easy to use as:

>>> import json 
>>> f = open("kaggle-stats-blogs-20111123-20120423.json")
>>> json.loads(f.readline())
{u'blog_id': 4, u'num_likes': 781, u'num_posts': 204}

Having figured out how to parse our data formats, we turn our attention to the LDA. Briefly, Latent Dirichlet Allocation is an unsupervised semantics model that takes a corpus of documents--in this case, the blog posts--and estimates what "topics" they are about. A "topic" is a set of word frequencies, and a document is assumed to be composed of a mixture of topics. (Check out wikipedia for more detailed information). LDA often produces very intuitive results; in this case, for example, one of the topics was on the Trayvon Martin shooting, and another on Christianity.
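To make that concrete, here is a toy illustration of what a topic and a post's topic mixture look like. (The words and numbers are made up for illustration; they are not output from the competition data.)

# A "topic" is a distribution over words (made-up numbers):
topic_17 = {"trayvon": 0.04, "martin": 0.03, "shooting": 0.02, "florida": 0.02}  # ...
topic_42 = {"church": 0.05, "jesus": 0.04, "faith": 0.03, "prayer": 0.02}        # ...

# A post about religious reactions to the shooting might be estimated
# as a mixture of roughly 60% topic 17 and 40% topic 42:
post_topic_mixture = [(17, 0.6), (42, 0.4)]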

I use the LDA implementation from the python gensim module. This implementation supports an "online" estimation of the topics, which means that we can feed the model chunks of blog posts at a time, (and don't need to load all of the posts into memory at once). To take advantage of online LDA, I build my own classes Files and Corp, which are used to iterate over posts, yielding the parsed content of those posts as a list of words:

from gensim import corpora
import json
import string

# An object to read and parse files without
# loading them entirely into memory
class Files():
    def __init__(self, files):
        self.files = files
    def __enter__(self):
        return self
    
    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

    # Read only one line at a time from the 
    # text files, to be memory friendly
    def __iter__(self):
        for f in self.files:
            # Reset the file pointer before a new iteration
            f.seek(0)
            for line in f:
                post = json.loads(line)
                content = post["content"]
                doc_words = []
                # parse and split the content up into
                # a list of lower-case words
                try: 
                    doc_words = strip_tags(content).encode('ascii',
                    'ignore').translate(string.maketrans("",""), 
                    string.punctuation).lower().split()
                except: # Fails on some nasty unicode
                    doc_words = []
                yield doc_words
    def __len__(self):
        n = 0
        for f in self.files:
            f.seek(0)
            for line in f:
                n += 1
        return n
    def close(self):
        for f in self.files:
            f.close()

# A helper class, for use in 
# gensim's LDA implementation
class Corp():
    def __init__(self, files, dic):
        self.files = files
        self.dic = dic
    def __iter__(self):
        for doc in self.files:
            yield self.dic.doc2bow(doc)
    def __len__(self):
        return len(self.files)
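
To see how Files and Corp fit together, here is a minimal sketch. (The file name sample.json is hypothetical; it assumes one json post per line with a "content" field, like the competition files.)

# Hypothetical smoke test of Files/Corp on a small file
with Files([open("sample.json")]) as fs:
    dic = corpora.Dictionary(doc for doc in fs)  # first pass: build the vocabulary
    for bow in Corp(fs, dic):                    # second pass: stream bag-of-words vectors
        print bow                                # e.g. [(0, 2), (3, 1), ...] as (token_id, count) pairs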

Our final step before actually beginning to work on our data is to define some stop words, that is, words that are so common that they don't help us distinguish between topics. We will not consider these words when training the LDA model.

# These are words that will be removed from posts, due to their 
# frequency and poor utility in distinguishing between topics
stop_words = ["a","able","about","across","after","all","almost",
"also","am","among","an","and","any","are","as","at","be","because",
"been","but","by","can","cannot","could","did","do","does","either",
"else","ever","every","for","from","get","got","had","has","have",
"he","her","hers","him","his","how","however","i","if","in","into",
"is","it","its","just","least","let","like","may","me","might","most",
"must","my","neither","no","nor","not","of","off","often","on","only",
"or","other","our","own","rather","said","say","says","she","should",
"since","so","some","than","that","the","their","them","then","there",
"these","they","this","to","too","us","wants","was","we","were",
"what","when","where","which","while","who","whom","why","will",
"with","would","yet","you","your"]

We will use our first pass over the data to generate the dictionary of "words" used in the blog posts. Because of imperfect parsing, some of these "words" are things like "www" that aren't truly words, but are nonetheless useful for determining what a blog post is about. At this point, the code begins to take a while to run, so we occasionally save our work and attempt to load it in the try/except blocks.

from __future__ import division
from collections import defaultdict
from Corp import stop_words, Files, Corp
from gensim import corpora, models, similarities
import logging
import json
import cPickle
import random

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# First, we make a dictionary of words used in the posts
with Files([open("../trainPosts.json"), open("../testPosts.json")]) as myFiles:
    try: 
        dictionary = corpora.dictionary.Dictionary.load("dictionary.saved")
    except:
        dictionary = corpora.Dictionary(doc for doc in myFiles)
        stop_ids = [dictionary.token2id[stopword] for stopword in stop_words if stopword in dictionary.token2id]
        infreq_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq < 50]
        dictionary.filter_tokens(stop_ids + infreq_ids) # remove stop words and words that appear infrequently
        dictionary.compactify() # remove gaps in id sequence after words that were removed

        dictionary.save("dictionary.saved")
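
With the dictionary built, any list of words can be converted into the sparse bag-of-words format that gensim expects. For example (the token ids below are made up; words that were filtered out of the dictionary, like the stop word "the", are simply dropped):

>>> dictionary.doc2bow("the wordpress competition has a wordpress blog".split())
[(1542, 1), (2871, 1), (9034, 2)]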

Next, we train the LDA model with the blog posts, estimating 100 topics.

    # (Still inside the "with Files(...) as myFiles" block from above)
    try:
        lda = models.ldamodel.LdaModel.load("lda.saved")
    except:
        lda = models.ldamodel.LdaModel(corpus=Corp(myFiles, dictionary), id2word=dictionary,
                                       num_topics=100, update_every=1, chunksize=10000, passes=1)

        lda.save("lda.saved")
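
Once the model is trained, it's worth sanity-checking the estimated topics by looking at their top words. Something like the following works, though the method's argument names vary a bit between gensim versions:

# Log the top words of some of the 100 estimated topics (at INFO level,
# which our logging configuration above will display)
lda.print_topics(10)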

Now we do some quick preliminary work to determine which blogs have which posts, and to map post_ids to zero-based indices and vice versa.

trainPostIndices = {}
blogTrainPosts = defaultdict(list)
with open("../trainPostsThin.json") as f:
    for i, line in enumerate(f):
        post = json.loads(line)
        blog_id = post["blog"]
        post_id = post["post_id"]
        trainPostIndices[post_id] = i
        blogTrainPosts[blog_id].append(post_id)

logging.info("Done doing preliminary training data processing")

testPostIds = []
testPostIndices = {}
blogTestPosts = defaultdict(list)
with open("../testPostsThin.json") as f:
    for i, line in enumerate(f):
        post = json.loads(line)
        blog_id = post["blog"]
        post_id = post["post_id"]
        testPostIds.append(post_id)
        testPostIndices[post_id] = i
        blogTestPosts[blog_id].append(post_id)

logging.info("Done doing preliminary test data processing")

 

We now estimate the test post topic distributions. Each distribution is represented by a 100-dimensional (sparse) vector, one for each post, which indicates the likelihood that a word from a given post will belong to a given topic. We then construct a lookup index of these test post vectors, for quick answers to questions about which test posts are similar to a given training post. The similarity measure between two posts is defined to be the cosine of the angle between their topic distribution vectors, like the correlation except that we do not subtract the mean. Since the similarity measure is the cosine of an angle, it is always between -1 and 1.
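
For concreteness, here is that similarity measure computed by hand on two gensim-style sparse topic vectors. (This is just an illustration; in the code below, gensim's similarity classes do this work for us.)

from math import sqrt

def cosine_similarity(u, v):
    # u and v are sparse topic vectors: lists of (topic_id, weight) pairs,
    # as returned by lda[bow]
    du, dv = dict(u), dict(v)
    dot = sum(w * dv.get(t, 0.0) for t, w in du.iteritems())
    norm_u = sqrt(sum(w * w for w in du.itervalues()))
    norm_v = sqrt(sum(w * w for w in dv.itervalues()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# e.g. cosine_similarity([(3, 0.7), (12, 0.3)], [(3, 0.5), (40, 0.5)]) is about 0.65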

logging.info("Making the test lookup index...")

try:
    testVecs = cPickle.load(open("TestVecs.saved", "r"))
    testIndex = similarities.Similarity.load("TestIndex.saved")
except:
    with Files([open("../testPosts.json")]) as myFilesTest:
        myCorpTest = Corp(myFilesTest, dictionary)
        testVecs = [vec for vec in lda[myCorpTest]]
        testIndex = similarities.Similarity("./simDump/", testVecs, num_features=100)
        testIndex.num_best = 100
    cPickle.dump(testVecs, open("TestVecs.saved", "w"))
    testIndex.save("TestIndex.saved")

logging.info("Done making the test lookup index")

 

We estimate the training topic vectors, which we can hold in memory since they are sparsely coded in gensim. This is purely a matter of convenience; if this were too onerous a requirement on memory, we could estimate the training topics on the fly.

logging.info("Estimating the training topics...")

try:
    trainVecs = cPickle.load(open("TrainVecs.saved", "r"))
except:
    with Files([open("../trainPosts.json")]) as myFilesTrain:
        myCorpTrain = Corp(myFilesTrain, dictionary)
        trainVecs = [vec for vec in lda[myCorpTrain]]
    cPickle.dump(trainVecs, open("TrainVecs.saved", "w"))

logging.info("Done estimating the training topics")

 

Finally, we begin making submissions. As you'll recall, we only consider posts from blogs the user has liked before. To rank the test posts from these blogs, we score them as follows: every post gets a score that is the sum of a "blog score" and a "semantic score". The blog score is the number of times the user liked the post's blog in the training set, divided by the number of test posts from that blog, weighted by a "blog_weight", in this case 2.0. The semantic score is the greatest semantic similarity between the post in question and the posts the user liked in the training period.

As an example, suppose we wished to score this blog post for a given user. Suppose the user had liked posts from the "Kaggle" blog 8 times during the training period, that the Kaggle blog has 13 posts in the test period, and that the closest semantic similarity between this post and any of the posts the user liked in the training period was 0.93. Then this post would be scored as 2.0 * 8/13 + 0.93 ≈ 2.16.
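
In code, that example score works out as follows:

# The worked example above, in code
blog_weight = 2.0
blog_score = blog_weight * 8 / 13    # liked the blog 8 times; the blog has 13 test posts
semantic_score = 0.93                # best cosine similarity to a previously liked post
score = blog_score + semantic_score  # about 2.16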

logging.info("Beginning to make submissions")
with open("../trainUsers.json", "r") as users, open("submissions.csv", "w") as submissions:
    submissions.write("\"posts\"\n")
    for user_total, line in enumerate(users):
        user = json.loads(line)
        if not user["inTestSet"]:
            continue

        blog_weight = 2.0
        posts = defaultdict(int) # The potential posts to recommend and their scores

        liked_blogs = [like["blog"] for like in user["likes"]]
        for blog_id in liked_blogs:
            for post_id in blogTestPosts[blog_id]:
                posts[post_id] += blog_weight / len(blogTestPosts[blog_id])
        # After this, posts[post_id] = blog_weight * (# times the blog of post_id was liked by the user in training) / (# test posts from that blog)
        posts_indices = [testPostIndices[post_id] for post_id in posts]
        posts_vecs = [testVecs[i] for i in posts_indices]

        liked_post_indices = []
        for like in user["likes"]:
            try: # For whatever reason, there is a slight mismatch between posts liked by users in trainUsers.json, and posts appearing in trainPosts.json
                liked_post_indices.append(trainPostIndices[like["post_id"]])
            except:
                logging.warning("Bad index!")

        total_likes = len(liked_post_indices)
        sample_size = min(10, total_likes)
        liked_post_indices = random.sample(liked_post_indices, sample_size) # to cut down computation time
        liked_post_vecs = [trainVecs[i] for i in liked_post_indices]
        likedPostIndex = similarities.SparseMatrixSimilarity(liked_post_vecs, num_terms=100)

        for posts_index, similar in zip(posts_indices, likedPostIndex[posts_vecs]):
            posts[testPostIds[posts_index]] += max([rho for rho in similar])
        # ie, posts[post_id] += max(semantic similarities to sample of previously liked posts)

 

If there are fewer than 100 test posts from blogs the user has previously liked, we fill the remaining slots with posts semantically similar to previously liked posts, (almost always from different blogs).

        # (Still inside the loop over users)
        if len(posts) < 100:
            similar_posts_ids = [(testPostIds[i], rho)
                                 for similar100 in testIndex[liked_post_vecs]
                                 for i, rho in similar100]
            for post_id, rho in similar_posts_ids:
                posts[post_id] += rho / sample_size
                # dividing by the sample size ensures that the biggest additional
                # score a post could get from this is 1.0

 

Finally, we pick the top 100 posts, (or fewer if that's all we have), and write them to our submissions file!

        recommendedPosts = sorted(posts, key=posts.__getitem__, reverse=True)
        output = " ".join(recommendedPosts[:100]) + "\n"
        submissions.write(output)

        if user_total % 100 == 0:
            logging.info("User " + str(user_total) + " out of 16262")

 

Well, you've now seen it all, and no doubt you've noticed that there's plenty of room for improvement! Maybe we should include more blog posts than just those from blogs the user has previously liked, or actually parse the html instead of stripping the tags, (at present, for example, image tags are simply removed), or come up with a more sophisticated recommendation system than the primitive scoring I've used. Whatever you do, I can't wait to see it!

Wishing you the best of luck,

Naftali Harris
Intern Commander-In-Chief

P.S. Thanks to Martin O'Leary for help on cleaning up my code!

