
chore: remove all refs to azure CDN
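
This commit replaces every reference to the Azure CDN endpoint (`mmlspark.azureedge.net`) with the Azure Blob Storage endpoint (`mmlspark.blob.core.windows.net`) across docs, build configuration, and code.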
mhamilton723 committed Dec 13, 2024
1 parent 4a6a041 commit e69b471
Showing 73 changed files with 207 additions and 207 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -56,7 +56,7 @@ this process:
case of your algorithm, with instructions in step-by-step manner. (The same
notebook could be used for testing the code.)
- Add in-line ScalaDoc comments to your source code, to generate the [API
-reference documentation](https://mmlspark.azureedge.net/docs/pyspark/)
+reference documentation](https://mmlspark.blob.core.windows.net/docs/pyspark/)

#### Open a pull request

Expand Down
10 changes: 5 additions & 5 deletions README.md
@@ -1,4 +1,4 @@
-![SynapseML](https://mmlspark.azureedge.net/icons/mmlspark.svg)
+![SynapseML](https://mmlspark.blob.core.windows.net/icons/mmlspark.svg)

# Synapse Machine Learning

@@ -97,7 +97,7 @@ In Microsoft Fabric notebooks SynapseML is already installed. To change the vers
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -120,7 +120,7 @@ In Azure Synapse notebooks please place the following in the first cell of your
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.8",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -136,7 +136,7 @@ In Azure Synapse notebooks please place the following in the first cell of your
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -156,7 +156,7 @@ coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-fro
in your workspace.

For the coordinates use: `com.microsoft.azure:synapseml_2.12:1.0.8`
-with the resolver: `https://mmlspark.azureedge.net/maven`. Ensure this library is
+with the resolver: `https://mmlspark.blob.core.windows.net/maven`. Ensure this library is
attached to your target cluster(s).

Finally, ensure that your Spark cluster has at least Spark 3.2 and Scala 2.12. If you encounter Netty dependency issues please use DBR 10.1.
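
As a quick sanity check that the Blob-hosted resolver serves the package, a local PySpark session can pull SynapseML straight from it (a minimal sketch mirroring the install docs; the `1.0.8` pin and app name are illustrative):

```python
# Minimal sketch: resolve SynapseML from the Blob-hosted Maven repository.
# Assumes a local Spark >= 3.2 / Scala 2.12 environment; the 1.0.8 pin is illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("synapseml-resolver-check")
    .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.0.8")
    .config("spark.jars.repositories", "https://mmlspark.blob.core.windows.net/maven")
    .getOrCreate()
)

import synapse.ml  # imports once the SynapseML jars and Python files are on the path
```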
2 changes: 1 addition & 1 deletion core/src/main/python/synapse/ml/core/init_spark.py
@@ -16,7 +16,7 @@ def init_spark():
+ __spark_package_version__
+ ",org.apache.spark:spark-avro_2.12:3.4.1",
)
.config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven")
.config("spark.jars.repositories", "https://mmlspark.blob.core.windows.net/maven")
.config("spark.executor.heartbeatInterval", "60s")
.config("spark.sql.shuffle.partitions", 10)
.config("spark.sql.crossJoin.enabled", "true")
@@ -9,7 +9,7 @@
from pyspark.ml.param.shared import *
from synapse.ml.core.schema.Utils import *

DEFAULT_URL = "https://mmlspark.azureedge.net/datasets/CNTKModels/"
DEFAULT_URL = "https://mmlspark.blob.core.windows.net/datasets/CNTKModels/"


class ModelSchema:
@@ -9,7 +9,7 @@ import com.microsoft.azure.synapse.ml.build.BuildInfo
* Centralized values for package repositories and coordinates (mostly used by test pipeline frameworks)
*/
object PackageUtils {
private val SparkMLRepository = "https://mmlspark.azureedge.net/maven"
private val SparkMLRepository = "https://mmlspark.blob.core.windows.net/maven"
private val SonatypeSnapshotsRepository = "https://oss.sonatype.org/content/repositories/snapshots"

val ScalaVersionSuffix: String = BuildInfo.scalaVersion.split(".".toCharArray).dropRight(1).mkString(".")
@@ -43,7 +43,7 @@
"# \"name\": \"synapseml\",\n",
"# \"conf\": {\n",
"# \"spark.jars.packages\": \"com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>\",\n",
"# \"spark.jars.repositories\": \"https://mmlspark.azureedge.net/maven\",\n",
"# \"spark.jars.repositories\": \"https://mmlspark.blob.core.windows.net/maven\",\n",
"# \"spark.jars.excludes\": \"org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind\",\n",
"# \"spark.yarn.user.classpath.first\": \"true\",\n",
"# \"spark.sql.parquet.enableVectorizedReader\": \"false\"\n",
2 changes: 1 addition & 1 deletion docs/Explore Algorithms/Deep Learning/Getting Started.md
@@ -27,7 +27,7 @@ pip install synapseml==1.0.8
An alternative is installing the SynapseML jar package in library management section, by adding:
```
Coordinate: com.microsoft.azure:synapseml_2.12:1.0.8
-Repository: https://mmlspark.azureedge.net/maven
+Repository: https://mmlspark.blob.core.windows.net/maven
```
:::note
If you install the jar package, follow the first two cells of this [sample](../Quickstart%20-%20Fine-tune%20a%20Vision%20Classifier#environment-setup----reinstall-horovod-based-on-new-version-of-pytorch)
@@ -35,7 +35,7 @@
"\n",
"1. In Cluster Libraries install from library source Maven:\n",
"Coordinates: com.microsoft.azure:synapseml_2.12:1.0.8\n",
"Repository: https://mmlspark.azureedge.net/maven\n",
"Repository: https://mmlspark.blob.core.windows.net/maven\n",
"\n",
"2. In Cluster Libraries install from PyPI the library called plotly"
],
12 changes: 6 additions & 6 deletions docs/Get Started/Install SynapseML.md
@@ -13,7 +13,7 @@ SynapseML is already installed in Microsoft Fabric notebooks. To change the vers
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -33,7 +33,7 @@ For Spark3.4 pools
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.8",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -48,7 +48,7 @@ For Spark3.3 pools:
"name": "synapseml",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.repositories": "https://mmlspark.blob.core.windows.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
"spark.yarn.user.classpath.first": "true",
"spark.sql.parquet.enableVectorizedReader": "false"
@@ -66,7 +66,7 @@ import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
# Use 0.11.4-spark3.3 version for Spark3.3 and 1.0.8 version for Spark3.4
.config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.0.8") \
.config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven") \
.config("spark.jars.repositories", "https://mmlspark.blob.core.windows.net/maven") \
.getOrCreate()
import synapse.ml
```
@@ -77,7 +77,7 @@ If you're building a Spark application in Scala, add the following lines to
your `build.sbt`:

```scala
resolvers += "SynapseML" at "https://mmlspark.azureedge.net/maven"
resolvers += "SynapseML" at "https://mmlspark.blob.core.windows.net/maven"
// Use 0.11.4-spark3.3 version for Spark3.3 and 1.0.8 version for Spark3.4
libraryDependencies += "com.microsoft.azure" % "synapseml_2.12" % "1.0.8"
```
@@ -108,7 +108,7 @@ in your workspace.

For the coordinates use: `com.microsoft.azure:synapseml_2.12:1.0.8` for Spark3.4 Cluster and
`com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3` for Spark3.3 Cluster;
-Add the resolver: `https://mmlspark.azureedge.net/maven`. Ensure this library is
+Add the resolver: `https://mmlspark.blob.core.windows.net/maven`. Ensure this library is
attached to your target cluster(s).

Finally, ensure that your Spark cluster has at least Spark 3.2 and Scala 2.12.
2 changes: 1 addition & 1 deletion docs/Reference/Contributor Guide.md
@@ -65,7 +65,7 @@ this process:
case of your algorithm, with instructions in step-by-step manner. (The same
notebook could be used for testing the code.)
- Add in-line ScalaDoc comments to your source code, to generate the [API
-reference documentation](https://mmlspark.azureedge.net/docs/pyspark/)
+reference documentation](https://mmlspark.blob.core.windows.net/docs/pyspark/)

#### Open a pull request

14 changes: 7 additions & 7 deletions docs/Reference/R Setup.md
@@ -22,7 +22,7 @@ To install the current SynapseML package for R, first install synapseml-core:

```R
...
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-core-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-core-0.11.0.zip")
...
```

@@ -38,11 +38,11 @@ In other words:

```R
...
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-cognitive-0.11.0.zip")
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-deep-learning-0.11.0.zip")
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-lightgbm-0.11.0.zip")
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-opencv-0.11.0.zip")
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-vw-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-cognitive-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-deep-learning-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-lightgbm-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-opencv-0.11.0.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-vw-0.11.0.zip")
...
```

@@ -120,7 +120,7 @@ and then use spark_connect with method = "databricks":

```R
install.packages("devtools")
devtools::install_url("https://mmlspark.azureedge.net/rrr/synapseml-1.0.8.zip")
devtools::install_url("https://mmlspark.blob.core.windows.net/rrr/synapseml-1.0.8.zip")
library(sparklyr)
library(dplyr)
sc <- spark_connect(method = "databricks")
2 changes: 1 addition & 1 deletion project/BlobMavenPlugin.scala
@@ -39,7 +39,7 @@ object BlobMavenPlugin extends AutoPlugin {
| `${organization.value}:${moduleName.value}_${scalaBinaryVersion.value}:${version.value}`
|
|### Maven Resolver
-| `https://mmlspark.azureedge.net/maven`
+| `https://mmlspark.blob.core.windows.net/maven`
|""".stripMargin
}
)
2 changes: 1 addition & 1 deletion tools/docker/demo/init_notebook.py
@@ -32,7 +32,7 @@
),
(
"spark.jars.repositories",
"https://mmlspark.azureedge.net/maven,https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure,https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage",
"https://mmlspark.blob.core.windows.net/maven,https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure,https://mvnrepository.com/artifact/com.microsoft.azure/azure-storage",
),
],
)