moustachio - 11 months ago
Python Question

Extract document-topic matrix from Pyspark LDA Model

I have successfully trained an LDA model in spark, via the Python API:

from pyspark.mllib.clustering import LDA

This works completely fine, but I now need the document-topic matrix for the LDA model. As far as I can tell, all I can get is the word-topic matrix, using topicsMatrix().

Is there some way to get the document-topic matrix from the LDA model, and if not, is there an alternative method (other than implementing LDA from scratch) in Spark to run an LDA model that will give me the result I need?
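To make the distinction concrete: the word-topic matrix has one row per vocabulary word, while the document-topic matrix has one row per document. The toy sketch below (plain Python, made-up numbers, not the Spark API; real LDA inference is far more involved) only illustrates the shape of the two objects:

```python
# Hypothetical word-topic matrix: one entry per vocabulary word,
# each mapping to a list of per-topic weights (2 topics here).
word_topic = {
    "spark":  [0.9, 0.1],
    "python": [0.8, 0.2],
    "pizza":  [0.1, 0.9],
}

def doc_topic_scores(doc_words, word_topic):
    """Crude per-document topic scores: sum the topic weights of the
    document's words and normalize. This is NOT real LDA inference;
    it only shows what a document-topic row looks like."""
    k = len(next(iter(word_topic.values())))
    scores = [0.0] * k
    for w in doc_words:
        for t, weight in enumerate(word_topic.get(w, [0.0] * k)):
            scores[t] += weight
    total = sum(scores) or 1.0
    return [s / total for s in scores]

# One row of a would-be document-topic matrix:
print(doc_topic_scores(["spark", "python"], word_topic))  # → [0.85, 0.15]
```

Each document gets one such row; stacking the rows over all documents gives the document-topic matrix being asked about.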


After digging around a bit, I found the documentation for DistributedLDAModel in the Java API, which has a topicDistributions() method that I think is just what I need here (but I'm not 100% sure if the LDAModel in Pyspark is in fact a DistributedLDAModel under the hood...).

In any case, I am able to indirectly call this method like so, without any overt failures:

In [127]: model.call('topicDistributions')
Out[127]: MapPartitionsRDD[3156] at mapPartitions at PythonMLLibAPI.scala:1480

But if I actually look at the results, all I get are strings telling me that each result is actually a Scala tuple (I think):

In [128]: model.call('topicDistributions').take(5)
[{u'__class__': u'scala.Tuple2'},
{u'__class__': u'scala.Tuple2'},
{u'__class__': u'scala.Tuple2'},
{u'__class__': u'scala.Tuple2'},
{u'__class__': u'scala.Tuple2'}]

Maybe this is generally the right approach, but is there a way to get the actual results?


After extensive research, this is definitely not possible via the Python API on the current version of Spark (1.5.1). But in Scala it's fairly straightforward (given an RDD called documents on which to train):

import org.apache.spark.mllib.clustering.{LDA, DistributedLDAModel}

// first generate RDD of documents...

val numTopics = 10
val lda = new LDA().setK(numTopics).setMaxIterations(10)
val ldaModel = lda.run(documents)

// then convert to distributed LDA model
val distLDAModel = ldaModel.asInstanceOf[DistributedLDAModel]

Then getting the document-topic distributions is as simple as:

val docTopicDist = distLDAModel.topicDistributions
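Note that topicDistributions yields (documentId, topicVector) pairs, not a matrix directly. Once those pairs are collected to the driver, assembling the actual document-topic matrix is just a sort by document id. A plain-Python sketch with toy data (the ids and vectors here are made up for illustration):

```python
# Toy (documentId, topicDistribution) pairs, the shape of data that
# DistributedLDAModel.topicDistributions produces. RDD results arrive
# in no particular order, so sort by id before stacking.
pairs = [
    (2, [0.1, 0.9]),
    (0, [0.7, 0.3]),
    (1, [0.5, 0.5]),
]

# Row i of the matrix is the topic distribution of document i.
doc_topic_matrix = [dist for _, dist in sorted(pairs)]
print(doc_topic_matrix)  # → [[0.7, 0.3], [0.5, 0.5], [0.1, 0.9]]
```

For large corpora you would of course keep this as an RDD rather than collecting it, but the sort-by-id step is the same idea.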