Reading/Writing to Nebula Graph through PySpark #5650
-
It's actually already supported; take a look at the nebula-spark-connector repo.
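For illustration, a minimal PySpark read through the connector's DataSource could look like the sketch below. The space name, tag, and metad address are placeholders, and the option names should be verified against the connector release you actually use:

```python
# Start PySpark with the connector jar on the classpath, e.g.:
#   pyspark --jars nebula-spark-connector-3.0.0.jar   (version illustrative)

df = (
    spark.read.format("com.vesoft.nebula.connector.NebulaDataSource")
    .option("type", "vertex")                 # scan vertices of one tag
    .option("spaceName", "basketballplayer")  # placeholder space name
    .option("label", "player")                # placeholder tag name
    .option("returnCols", "name,age")         # properties to return
    .option("metaAddress", "metad0:9559")     # metad endpoint reachable from Spark
    .option("partitionNumber", 1)
    .load()
)
df.show()
```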
-
Also, it's highly recommended to leverage meta/storage to scan the data instead of doing it in a query-based way; that will be more scalable and performant when a large amount of data is touched. In K8s, this can be done by putting Spark in the same namespace as NebulaGraph, as in the sketch below.
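To make the same-namespace setup concrete, here is a hedged write sketch. The in-cluster service DNS names are hypothetical and depend on your deployment; note that writing additionally requires a reachable graphd, and option names should again be checked against your connector version:

```python
# Hypothetical in-cluster addresses; substitute the services from your
# deployment (kubectl get svc -n <namespace>).
meta_address = "nebula-metad-svc.nebula.svc.cluster.local:9559"
graph_address = "nebula-graphd-svc.nebula.svc.cluster.local:9669"

(
    df.write.format("com.vesoft.nebula.connector.NebulaDataSource")
    .option("type", "vertex")
    .option("spaceName", "basketballplayer")  # placeholder space name
    .option("label", "player")                # placeholder tag name
    .option("vertexField", "_vertexId")       # DataFrame column holding the VID
    .option("vidPolicy", "")                  # empty when VIDs are used as-is
    .option("batch", 256)                     # rows per write batch
    .option("metaAddress", meta_address)
    .option("graphAddress", graph_address)
    .option("user", "root")                   # placeholder credentials
    .option("passwd", "nebula")
    .option("writeMode", "insert")
    .save()
)
```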
-
@wey-gu - You mentioned, "In K8s, this can be done by putting Spark in the same namespace as NebulaGraph." But we are running the ETL pipeline and the NebulaGraph database on different clusters.
-
I need to develop an ETL pipeline in PySpark that writes data into NebulaGraph. I have seen an example of PySpark with the Nebula Spark Connector, but the code sample uses metaAddress only.
We have deployed NebulaGraph on K8s in a different cluster, and only nebula-graphd-svc is exposed. Is there any way to access NebulaGraph through nebula-graphd-svc only?
Please also share code for reference.
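For reference, one approach that works with only nebula-graphd-svc exposed is to skip the connector's scan path entirely and write through the query interface with the nebula3-python client from inside foreachPartition. This is a minimal sketch, not connector code: the space, tag, credentials, and column names are placeholders, and per-row INSERTs should be batched in practice:

```python
from nebula3.Config import Config
from nebula3.gclient.net import ConnectionPool

GRAPHD_ADDRS = [("nebula-graphd-svc", 9669)]  # the only exposed service

def write_partition(rows):
    # One connection pool per partition, created on the executor;
    # only graphd has to be reachable from the Spark cluster.
    pool = ConnectionPool()
    assert pool.init(GRAPHD_ADDRS, Config())
    session = pool.get_session("root", "nebula")  # placeholder credentials
    try:
        session.execute("USE basketballplayer;")  # placeholder space
        for row in rows:
            stmt = (
                "INSERT VERTEX player(name, age) "
                f'VALUES "{row.vid}":("{row.name}", {row.age});'
            )
            result = session.execute(stmt)
            assert result.is_succeeded(), result.error_msg()
    finally:
        session.release()
        pool.close()

df.rdd.foreachPartition(write_partition)
```

The trade-off is the one mentioned above: pushing writes through graphd scales worse than the connector's storage-level path, so this fits moderate data volumes or network topologies where only graphd can be exposed.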