Lately I’ve been studying up on the math and technology associated with data science because there are so many interesting things going on. Despite taking many notes, I found myself learning certain important terms, seeing them again later, and then thinking “What was that again? P-values? Huh?”
While watching an excellent video about the pandas Python data analysis library recently, I learned about how the University of Minnesota’s GroupLens project has made a large amount of movie rating data from the MovieLens website available. Their download page lets you pull down 100,000, one million, ten million, or twenty million ratings, including data about the people doing the rating and the movies they rated.
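In case you want to play with that data yourself, here is a rough sketch of loading the smallest (100,000-rating) download with pandas. The file names, separators, and column order below follow the ml-100k archive’s README, so treat them as assumptions if you grab a different version; the larger downloads use “::”-separated .dat files instead, so the details change a bit there.

```python
import pandas as pd

# u.data: one rating per line, tab-separated as user / movie / rating / timestamp
ratings = pd.read_csv(
    "ml-100k/u.data",
    sep="\t",
    names=["user_id", "movie_id", "rating", "timestamp"],
)

# u.item: movie details, pipe-separated and latin-1 encoded;
# keep only the first two columns (movie id and title)
movies = pd.read_csv(
    "ml-100k/u.item", sep="|", encoding="latin-1", header=None
).iloc[:, :2]
movies.columns = ["movie_id", "title"]

# Join ratings to titles and look at each movie's average rating and rating count
merged = ratings.merge(movies, on="movie_id")
print(
    merged.groupby("title")["rating"]
    .agg(["mean", "count"])
    .sort_values("count", ascending=False)
    .head()
)
```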
Earlier this month I tweeted “When people write about AI like it’s this brand new thing, should I be amused, feel old, or both?” The tweet linked to a recent Harvard Business Review article called Data Scientists Don’t Scale about the things that Artificial Intelligence is currently doing, which just happen to be the things that the article author’s automated prose-generation company does.
In Spark Is the New Black in IBM Data Magazine, I recently wrote about how popular the Apache Spark framework is for both Hadoop and non-Hadoop projects these days, and how for many people it goes so far as to replace one of Hadoop’s fundamental components: MapReduce. (I still have trouble writing “Spar” without writing “ql” after it.) While waiting for that piece to be copyedited, I came across 5 Reasons Why Spark Matters to Business by my old XML.com editor Edd…
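To give a feel for what “replacing MapReduce” looks like in practice, here is a minimal sketch of the classic word-count job written against Spark’s Python API rather than as a Hadoop MapReduce job. It assumes a local Spark installation with pyspark available, and the input.txt file name is made up for illustration.

```python
from pyspark import SparkContext

# The standard MapReduce demo, done with Spark's RDD API:
# count how often each word appears in a text file.
sc = SparkContext("local[*]", "wordcount")

counts = (
    sc.textFile("input.txt")                 # one RDD element per line
      .flatMap(lambda line: line.split())    # "map" phase: emit individual words
      .map(lambda word: (word, 1))           # key/value pairs of (word, 1)
      .reduceByKey(lambda a, b: a + b)       # "reduce" phase: sum the counts per word
)

for word, count in counts.take(10):
    print(word, count)

sc.stop()
```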
Note: I wrote this blog entry to accompany the IBM Data Magazine piece mentioned in the first paragraph, so for people following the link from there, it goes into a little more detail about what RDF, triples, and SPARQL are than I normally do on this blog. I hope that readers already familiar with these standards will find the parts about doing the inferencing on a Hadoop cluster interesting.
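For readers coming in cold, a tiny made-up example may help before diving in: the sketch below uses the rdflib Python library to parse three triples written in Turtle syntax and then runs a SPARQL query over them. The example.org URIs and the movie facts are purely illustrative data, not anything from the dataset discussed in the entry itself.

```python
from rdflib import Graph

# Three RDF triples in Turtle syntax: each one is a
# subject / predicate / object statement about a resource.
turtle_data = """
@prefix d: <http://example.org/data/> .
@prefix v: <http://example.org/vocab/> .

d:movie42     v:title     "Blade Runner" .
d:movie42     v:director  d:ridleyScott .
d:ridleyScott v:name      "Ridley Scott" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# A SPARQL query over those triples: find each movie title
# along with the name of its director.
query = """
PREFIX v: <http://example.org/vocab/>
SELECT ?title ?directorName WHERE {
    ?movie    v:title    ?title ;
              v:director ?director .
    ?director v:name     ?directorName .
}
"""

for row in g.query(query):
    print(row.title, "-", row.directorName)
```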