Minor code cleanup #10

Merged · 1 commit · Aug 13, 2018
pom.xml: 21 changes (0 additions & 21 deletions)

@@ -18,26 +18,6 @@
     </properties>
 
     <dependencies>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-core_2.10</artifactId>
-            <version>${spark.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-sql_2.10</artifactId>
-            <version>${spark.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-hive_2.10</artifactId>
-            <version>${spark.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.spark</groupId>
-            <artifactId>spark-graphx_2.10</artifactId>
-            <version>${spark.version}</version>
-        </dependency>
         <dependency>
             <groupId>org.apache.spark</groupId>
             <artifactId>spark-mllib_2.10</artifactId>
@@ -63,7 +43,6 @@
             <artifactId>scala-maven-plugin</artifactId>
             <version>3.1.6</version>
             <executions>
-
                 <execution>
                     <id>compile</id>
                     <goals>
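A note on why this removal is safe: spark-core, spark-sql and spark-graphx are compile-scope transitive dependencies of the remaining spark-mllib artifact, and spark-hive appears to be unused here, so none of them needs to be declared explicitly. A hypothetical compile-time sanity check (not part of this PR; the object name and the particular imports are illustrative):

// Hypothetical check, not project code: these classes live in spark-core,
// spark-sql and spark-graphx, which remain on the classpath transitively
// through the surviving spark-mllib dependency.
import org.apache.spark.SparkContext   // from spark-core
import org.apache.spark.sql.SQLContext // from spark-sql
import org.apache.spark.graphx.Graph   // from spark-graphx

object DependencyCheck {
  def main(args: Array[String]): Unit = {
    // Referencing the classes is enough: this file compiles only if they resolve.
    println(classOf[SparkContext].getName)
    println(classOf[SQLContext].getName)
    println(classOf[Graph[_, _]].getName)
  }
}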
@@ -139,7 +139,7 @@ class FMWithLBFGS(private var task: Int,
 
   /**
    * Encode the FMModel to a dense vector, with its first numFeatures * numFactors elements representing the
-   * factorization matrix v, sequential numFeaturs elements representing the one-way interactions weights w if k1 is
+   * factorization matrix v, sequential numFeatures elements representing the one-way interactions weights w if k1 is
    * set to true, and the last element representing the intercept w0 if k0 is set to true.
    * The factorization matrix v is initialized by Gaussinan(0, initStd).
    * v : numFeatures * numFactors + w : [numFeatures] + w0 : [1]
@@ -165,7 +165,7 @@ class FMWithLBFGS(private var task: Int,
 
 
   /**
-   * Create a FMModle from an encoded vector.
+   * Create a FMModel from an encoded vector.
    */
   private def createModel(weights: Vector): FMModel = {
 
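For reference, the vector layout described in the corrected comment, as a self-contained sketch (illustrative only: the encode helper and its plain-array representation are assumptions, not this project's actual implementation):

import scala.util.Random

// Sketch of the documented layout:
//   v : numFeatures * numFactors + w : [numFeatures] + w0 : [1]
object EncodingSketch {
  def encode(numFeatures: Int, numFactors: Int,
             k0: Boolean, k1: Boolean, initStd: Double): Array[Double] = {
    val rnd = new Random()
    // Factorization matrix v, initialized from Gaussian(0, initStd).
    val v = Array.fill(numFeatures * numFactors)(rnd.nextGaussian() * initStd)
    // One-way weights w only if k1 is set; intercept w0 only if k0 is set.
    val w  = if (k1) Array.fill(numFeatures)(0.0) else Array.empty[Double]
    val w0 = if (k0) Array(0.0) else Array.empty[Double]
    v ++ w ++ w0
  }
}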
@@ -169,7 +169,7 @@ class FMWithSGD(private var task: Int,
 
   /**
    * Encode the FMModel to a dense vector, with its first numFeatures * numFactors elements representing the
-   * factorization matrix v, sequential numFeaturs elements representing the one-way interactions weights w if k1 is
+   * factorization matrix v, sequential numFeatures elements representing the one-way interactions weights w if k1 is
    * set to true, and the last element representing the intercept w0 if k0 is set to true.
    * The factorization matrix v is initialized by Gaussinan(0, initStd).
    * v : numFeatures * numFactors + w : [numFeatures] + w0 : [1]
@@ -195,7 +195,7 @@ class FMWithSGD(private var task: Int,
 
 
   /**
-   * Create a FMModle from an encoded vector.
+   * Create a FMModel from an encoded vector.
    */
   private def createModel(weights: Vector): FMModel = {
 
@@ -107,13 +107,13 @@ object FMModel extends Loader[FMModel] {
 
     // Create Parquet data.
     val dataRDD: DataFrame = sc.parallelize(Seq(data), 1).toDF()
-    dataRDD.saveAsParquetFile(dataPath(path))
+    dataRDD.write.parquet(dataPath(path))
   }
 
   def load(sc: SparkContext, path: String): FMModel = {
     val sqlContext = new SQLContext(sc)
     // Load Parquet data.
-    val dataRDD = sqlContext.parquetFile(dataPath(path))
+    val dataRDD = sqlContext.read.parquet(dataPath(path))
     // Check schema explicitly since erasure makes it hard to use match-case for checking.
     checkSchema[Data](dataRDD.schema)
     val dataArray = dataRDD.select("task", "factorMatrix", "weightVector", "intercept", "min", "max").take(1)
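The last change swaps the Parquet helpers deprecated in Spark 1.4 (saveAsParquetFile, parquetFile) for the DataFrameWriter/DataFrameReader API. A minimal round-trip sketch under that assumption (the app name, output path and sample data are made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("parquet-demo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Seq((0, 1.0), (1, 2.0))).toDF("id", "value")

    // Before Spark 1.4 one would call df.saveAsParquetFile(...); now:
    df.write.parquet("/tmp/parquet-demo")

    // Likewise sqlContext.parquetFile(...) becomes:
    val loaded = sqlContext.read.parquet("/tmp/parquet-demo")
    loaded.show()

    sc.stop()
  }
}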